One thing you could try is to replace the string Perl uses to identify the ends of lines (the input record separator, $/) with one of the tags in your file.
    {
        local $/ = '<tag>';
        while (<>) {
            # do stuff
        }
    }
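To make that concrete, here is a minimal sketch of the record-separator approach applied to a search/replace. The tag name `<tag>` and the `s/old/new/g` substitution are placeholders for whatever is in your data, and an in-memory filehandle stands in for the real file:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-in for a huge file with no newlines, only '<tag>' markers.
my $data = "foo old bar<tag>baz old qux<tag>";
open my $fh, '<', \$data or die $!;   # in-memory handle; use your real file here

local $/ = '<tag>';                   # records now end at '<tag>', not "\n"
my $out = '';
while (my $record = <$fh>) {
    $record =~ s/old/new/g;           # whatever replacement you actually need
    $out .= $record;
}
close $fh;
print $out;                           # "foo new bar<tag>baz new qux<tag>"
```

Because each record still ends with the tag itself, you can write the records back out unchanged apart from your substitution.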
Or you could just feed a while loop one fixed-size chunk of data at a time, using a trick described in perldoc perlvar:
Setting $/ to a reference to an integer, scalar containing an integer, or scalar that's convertible to an integer will attempt to read records instead of lines, with the maximum record size being the referenced integer. So this:

    local $/ = \32768; # or \"32768", or \$var_containing_32768
    open my $fh, $myfile or die $!;
    local $_ = <$fh>;

will read a record of no more than 32768 bytes from FILE. If you're not reading from a record-oriented file (or your OS doesn't have record-oriented files), then you'll likely get a full chunk of data with every read. If a record is larger than the record size you've set, you'll get the record back in pieces.
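The chunked approach from perlvar can be sketched like this. The 16-byte chunk size and the `s/old/new/g` pattern are illustrative only; one caveat for search/replace work is that a match straddling a chunk boundary will be missed, so you may need to carry an overlap between reads if that can happen in your data:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-in for a huge one-line file (here the chunk size is a multiple
# of the pattern length, so no match straddles a boundary).
my $data = 'old ' x 10;
open my $fh, '<', \$data or die $!;   # in-memory handle; use your real file here

local $/ = \16;                       # each <$fh> returns at most 16 bytes
my $out = '';
while (my $chunk = <$fh>) {
    $chunk =~ s/old/new/g;            # replace within this chunk only
    $out .= $chunk;
}
close $fh;
print $out;                           # "new " repeated 10 times
```

This keeps memory use bounded regardless of file size, which is the point of the trick.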
Update: Commented out irrelevant text from perlvar.
--
Allolex
In reply to Re: search/replace very large file w/o linebreaks
by allolex
in thread search/replace very large file w/o linebreaks
by Anonymous Monk