in reply to Iterating through HUGE FILES

Well, are there newlines in the file? (Or, to be precise: does the value of the input record separator appear anywhere in the file?)

If there aren't, then that's your problem. What you can do then is either set the input record separator ($/) to another character, or set it to a reference to an integer, in which case that many bytes will be read per call. (For example, after $/ = \123;, <OUT> will read 123 bytes each time.)
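
A minimal sketch of the fixed-size-read approach (the filename, the lexical handle, and the process() sub are placeholders of mine, not from the original post):

    use strict;
    use warnings;

    sub process { my ($chunk) = @_; print length($chunk), " bytes\n" }

    open my $fh, '<', 'huge_file.dat' or die "Can't open huge_file.dat: $!";
    {
        local $/ = \123;              # <$fh> now returns 123-byte records
        while (my $chunk = <$fh>) {   # the final chunk may be shorter
            process($chunk);
        }
    }
    close $fh;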

Re^2: Iterating through HUGE FILES
by northwind (Hermit) on May 10, 2005 at 17:53 UTC

    To add to/support what Animator said, I use the $/=\123 trick regularly at work.  The implementation reads in 2M worth of data, processes it, seeks back 1k, and reads in another 2M chunk.  The code seeks back 1k because the processing involves regular expressions, and we want to catch matches that straddle the 2M boundary.  If you have variable-length records, this may not work so well, as it is very likely that the end of the read-in buffer will fall in the middle of a record.
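
    A minimal sketch of that chunk-and-overlap approach, under assumptions of mine: the filename and pattern are placeholders, matches must be shorter than 1k for the overlap to guarantee they are seen, and a match that falls entirely inside the overlap will be reported twice, so real code would track absolute offsets to de-duplicate:

        use strict;
        use warnings;

        my $chunk_size = 2 * 1024 * 1024;   # read 2M at a time
        my $overlap    = 1024;              # back up 1k between chunks

        open my $fh, '<', 'huge_file.dat' or die "Can't open huge_file.dat: $!";
        binmode $fh;

        my $pattern = qr/needle/;           # placeholder pattern
        {
            local $/ = \$chunk_size;        # <$fh> now returns 2M chunks
            while (my $buf = <$fh>) {
                while ($buf =~ /$pattern/g) {
                    # handle the match; pos($buf) is the offset past it
                }
                last if eof $fh;
                seek $fh, -$overlap, 1;     # re-read the last 1k so a match
                                            # straddling the 2M boundary shows
                                            # up at the start of the next chunk
            }
        }
        close $fh;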