in reply to Large file data extraction

Erm, you never read anything past the first "record" (and your delimiter of "<hr>\r" looks suspicious; you probably mean "<hr>\n" instead, which would explain why it's slurping the entire file in: the delimiter never actually matches), and then you repeatedly search through that same record text. As long as there's a single match you're going to be stuck in an infinite loop looking at the same data every time (at least as the amended code reads; all bets are off given the truncated code sample).
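To see why a never-matching record separator slurps the whole file, here's a small self-contained sketch (the inline string stands in for your large file):

```perl
use strict;
use warnings;

# A never-matching input record separator ($/) slurps the whole file.
my $data = "one<hr>\ntwo<hr>\nthree<hr>\n";
open my $fh, '<', \$data or die $!;    # in-memory filehandle (Perl 5.8+)

local $/ = "<hr>\r";                   # wrong delimiter: never appears in $data
my $first = <$fh>;                     # so the first "record" is the entire file
print length($first), " of ", length($data), " bytes read\n";
```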

You really want something more along the lines of (presuming this really is your delimiter) the normal idiom for searching through a file:

local $/ = "<hr>\n";
while ( my $line = <> ) {
    ## process results from $line ...
}

(That aside, given this looks to be some sort of HTML, you may be better off, if it's sufficiently XML-y, using one of the stream-capable XML parsers (for instance XML::Twig will work this way; see the section "Processing an XML document chunk by chunk") rather than trying to rip things apart with regexen.)
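For instance, a rough sketch of the XML::Twig chunk-by-chunk approach (the <record> and <title> element names here are made up; adapt them to your actual markup):

```perl
use strict;
use warnings;
use XML::Twig;

my @titles;
my $twig = XML::Twig->new(
    twig_handlers => {
        # Called once per <record> element as it finishes parsing
        record => sub {
            my ( $t, $record ) = @_;
            push @titles, $record->first_child_text('title');
            $t->purge;    # discard the parsed record to keep memory flat
        },
    },
);

# Stands in for $twig->parsefile('big.xml') on a real document:
$twig->parse( '<doc><record><title>a</title></record>'
            . '<record><title>b</title></record></doc>' );
print "@titles\n";    # a b
```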

Update: Duur, quite right. Completely missed the /g modifier.

The cake is a lie.

Replies are listed 'Best First'.
Re^2: Large file data extraction
by GrandFather (Saint) on Aug 12, 2008 at 00:36 UTC
    So long as there's a single match you're going to be stuck in an infinite loop

    Actually no. There is a g modifier on the regex. The while loop only iterates as long as there is another match. Consider:

    print "$1\n" while '1234X5678X' =~ m/([^X]+)X/g;

    Prints:

    1234
    5678
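The loop terminates because a /g match in scalar context advances pos() past each match and fails once the string is exhausted; for instance:

```perl
use strict;
use warnings;

my $str = '1234X5678X';
while ( $str =~ m/([^X]+)X/g ) {
    printf "matched '%s', pos now %d\n", $1, pos($str);
}
# matched '1234', pos now 5
# matched '5678', pos now 10
```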

    Perl reduces RSI - it saves typing