in reply to File reading efficiency and other surly remarks
I tend to process most files line-by-line if they hold line-based data. If they hold information that can span lines (or records), I slurp the whole thing. Depending on file size, available memory, and what else is running, though, slurping may not be a good idea.
In this case, line by line seems like it would be more efficient. You can stop reading when you hit the record you want. (Of course, if you'll be doing this sort of thing often, I'd put everything in a database or at least a tied hash, and let something besides Perl handle the searching -- probably a little faster.)
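A minimal sketch of the line-by-line approach with an early exit (the filename `records.txt`, the colon-delimited format, and the key `fred` are all assumptions for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $wanted = 'fred';    # hypothetical search key

open my $fh, '<', 'records.txt'
    or die "Can't open records.txt: $!";

while ( my $line = <$fh> ) {
    chomp $line;
    my ( $key, @fields ) = split /:/, $line;
    if ( $key eq $wanted ) {
        print "Found: @fields\n";
        last;    # stop reading -- no need to touch the rest of the file
    }
}

close $fh;
```

The `last` is the whole point: on average you read half the file, and never more than all of it, without ever holding more than one line in memory.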