in reply to character-by-character in a huge file

It makes no sense to read a large file one byte at a time on a system where you can use perl. 3GB is a bit much to hold in memory at once, at least it is for me, but a buffer of a few KB to a few tens of KB should work really well. Also, there's no need for the two handles: use the same buffer for both.

My idea is to fill the buffer, look for the start character, and, if you find it too close to the end of the block, read some more and append it to the buffer. read() even supports appending out of the box, via its fourth (OFFSET) argument.

Sample code (not thoroughly tested):

    # FH is assumed to be an open filehandle on the huge file.
    my $windowsize = 500;
    use constant BLOCKLENGTH => 4096;
    my $buffer = "";
    my $offset = 0;        # file position of the start of $buffer
    while (my $r = read FH, $buffer, BLOCKLENGTH) {
        my $i = -1;
        until (($i = index($buffer, "x", ++$i)) < 0) {
            printf "found 'x' at %d+%d\n", $offset, $i;
            if ($i + $windowsize > length $buffer) {
                # get rid of what we no longer need, or we might end up
                # with a buffer holding the whole huge file:
                $offset += $i;
                $buffer = substr $buffer, $i;
                $i = 0;
                # append a new block (assuming BLOCKLENGTH >= $windowsize):
                $r = read FH, $buffer, BLOCKLENGTH, length $buffer;
                last if $windowsize > length $buffer;   # not long enough
            }
            # do something here...
            printf "offset for 'x' is %d, found '%s' at %d\n",
                $offset + $i,
                substr($buffer, $i + $windowsize - 1, 1),
                $offset + $i + $windowsize - 1;
        }
    } continue {
        $offset += length $buffer;
    }
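
For completeness, the loop assumes FH is already open on the file; a minimal setup might look like this (file name hypothetical):

    open FH, '<', 'huge.dat' or die "can't open huge.dat: $!";
    binmode FH;   # read raw bytes; avoids CRLF translation on Windows
    # ... run the scanning loop above ...
    close FH;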

Re: Re: character-by-character in a huge file
by ambrus (Abbot) on Apr 10, 2004 at 10:15 UTC

    One character at a time, why not? Perl already buffers the file, so getc should be faster than a lot of magic with substr (or unpack).
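
    For reference, a minimal sketch of the getc version (assuming FH is an open handle and $windowsize is defined, as in the parent node):

        my $pos = 0;
        while (defined(my $c = getc FH)) {
            if ($c eq 'x') {
                printf "found 'x' at %d\n", $pos;
                # grab or remember the next $windowsize bytes here...
            }
            $pos++;
        }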

      It depends on the frequency of the matches. I'm convinced that one call to index() on a 10k string, with a negative result or just a few matches, is a lot faster than 10,000 calls to getc() and the same number of eq tests. It's a matter of doing the same task in C or in Perl, and C usually wins hands down.
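
      To put rough numbers behind that, a Benchmark sketch (the 10k no-match string and the substr-per-character loop are stand-ins I'm assuming, not measurements from this node):

          use Benchmark qw(cmpthese);
          my $buf = 'a' x 10_000;   # 10k string with no 'x' in it
          cmpthese(-3, {
              # one C-level scan over the whole string
              index_once => sub { index $buf, 'x' },
              # one Perl-level test per character
              char_loop  => sub {
                  for my $j (0 .. length($buf) - 1) {
                      last if substr($buf, $j, 1) eq 'x';
                  }
              },
          });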