in reply to Perl Read-Ahead I/O Buffering

First I'd make sure that you are really limited by the read speed; only then does it make sense to optimize. I have no idea where the 160 GB of data goes after munging, but that destination (a database?) might just as well be your bottleneck, so profiling the application should be your first step. The slurp approach, with the input cut into manageable chunks (roughly RAM size), should be close to the optimum. Even if you can't use it in the finished program, it makes sense to benchmark it against line-by-line reading.
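A rough sketch of such a benchmark with the standard Benchmark module (the file name and iteration count are made up, and munge() stands in for your actual processing):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw(timethese);

    my $file = 'chunk.dat';    # hypothetical test file, roughly RAM-sized

    timethese( 10, {
        line_by_line => sub {
            open my $fh, '<', $file or die "open: $!";
            while ( my $line = <$fh> ) {
                # munge($line) would go here
            }
            close $fh;
        },
        slurp => sub {
            open my $fh, '<', $file or die "open: $!";
            my $data = do { local $/; <$fh> };    # undef $/ slurps the file
            close $fh;
            for my $line ( split /\n/, $data ) {
                # munge($line) would go here
            }
        },
    });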

The line-by-line approach has a single loop that pays every round trip (from disk to destination) in sequence. Splitting that work across multiple processes or threads makes better use of the resources.
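One cheap way to get that overlap in pure Perl is a forking open: a child process does nothing but read ahead and keep a pipe full while the parent munges. A minimal sketch ('infile' and munge() are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # open with '-|' and no command forks; the child's STDOUT feeds $reader
    my $pid = open my $reader, '-|';
    die "fork failed: $!" unless defined $pid;

    if ( $pid == 0 ) {
        # Child: stream the file to the pipe as fast as the disk allows.
        open my $in, '<', 'infile' or die "open infile: $!";
        print while <$in>;
        exit 0;
    }

    # Parent: munging overlaps with the child's read-ahead.
    while ( my $line = <$reader> ) {
        # munge($line) goes here
    }
    close $reader;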

A pipeline along the lines of

    buffer < infile | your_app.pl

effectively answers your initial question: a separate process does the read-ahead while your script munges. For output bottlenecks, http://search.cpan.org/src/TIMB/DBI_AdvancedTalk_2004/index.htm might be a first read.
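For completeness, the reading side of your_app.pl in such a pipeline can stay trivial, since the upstream process does the read-ahead (munge() is again a placeholder):

    #!/usr/bin/perl
    # your_app.pl -- reading from STDIN lets any upstream process
    # (buffer, cat, gunzip, ...) do the read-ahead for you.
    use strict;
    use warnings;

    while ( my $line = <STDIN> ) {
        # munge($line) and write to the destination here
    }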