in reply to Re: Iterating through HUGE FILES
in thread Iterating through HUGE FILES

To add to (and support) what Animator said: I use the $/ = \123 trick regularly at work.  The implementation reads in 2M worth of data, processes it, seeks back 1k, and reads the next 2M chunk.  It seeks back 1k because the processing involves regular expressions and we want to catch any match that straddles the 2M boundary.  If you have variable-length records this may not work so well, since the end of the read-in buffer will very likely fall in the middle of a record.
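
Here is a minimal sketch of that chunk-and-seek-back approach. It uses toy sizes (16-byte records and a 6-byte overlap instead of 2M and 1k), an in-memory file, and a stand-in pattern /needle/ so the boundary behavior is easy to see; none of these specifics come from the real implementation.

```perl
use strict;
use warnings;

# Toy data: "needle" deliberately straddles the first 16-byte boundary
# (bytes 14..19), so a plain chunked read would never see it whole.
my $data = ('x' x 14) . 'needle' . ('y' x 20);
open my $fh, '<', \$data or die "open: $!";   # in-memory file for the demo

local $/ = \16;     # read fixed 16-byte records instead of lines
my $overlap = 6;    # must be at least as long as the longest possible match
my %seen;

while (my $chunk = <$fh>) {
    $seen{$1} = 1 while $chunk =~ /(needle)/g;
    # Back up so a match cut off at the chunk boundary is read whole
    # on the next pass; skip the seek once we hit end-of-file, or the
    # loop would never terminate.
    seek $fh, -$overlap, 1 unless eof $fh;
}
close $fh;

print exists $seen{needle} ? "found\n" : "missed\n";
```

Two caveats worth noting: the overlap has to be at least as long as the longest match you care about, and because the overlapped region is scanned twice, real code has to deduplicate matches that fall entirely inside it (the hash above is a crude way to do that).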