in reply to Fastest I/O possible?

This is one of those situations where if I could save a minuscule amount of time per record, it could potentially shave half an hour off the run time of these monster processing jobs.

If you're running this off of Win32, you can save noticeable time by periodically defragging your drives.

Regardless of the OS, you can save substantial time if the OUTPUT file you're writing is on a different physical drive than the datafiles you're reading. It takes a lot of time (relatively speaking) to move disk heads across the disk to do "read a bit here, write a bit there" operations. If you can rig things so that the drive heads move relatively small amounts (e.g., from track to track) while reading or writing, you can win big.
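The only code change this takes is where you point your filehandles. A sketch, with made-up paths and filehandle names:

    # hypothetical paths: input on one physical drive, output on another
    open(DATAFILE, "<", "C:/jobs/records.dat")   or die "Can't open input: $!";
    open(OUTPUT,   ">", "D:/scratch/output.dat") or die "Can't open output: $!";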

If you have to run everything off of one drive, then consider buffering your writes to OUTPUT. Perl's buffering will wait until a disk block is full before writing, but you can increase the effective buffer size by doing something like the following in your loop.

    push @buffer, join("|", @fields) . "\n";   # queue the record in memory
    if ( --$fuse == 0 ) {                      # buffer is full...
        print OUTPUT @buffer;                  # ...write it out in one shot
        @buffer = ();
        $fuse = $LINES_TO_BUFFER;
    }
Set $LINES_TO_BUFFER to something pretty big (10000 might be a good starting point), and be sure to flush whatever is left in @buffer once the loop finishes.
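Putting it all together, a minimal sketch (reusing the DATAFILE and OUTPUT filehandles from above; the tab-splitting is a stand-in for however your records are actually parsed):

    my $LINES_TO_BUFFER = 10000;
    my $fuse   = $LINES_TO_BUFFER;
    my @buffer = ();

    while (<DATAFILE>) {
        chomp;
        my @fields = split /\t/;             # however you parse a record
        push @buffer, join("|", @fields) . "\n";
        if ( --$fuse == 0 ) {
            print OUTPUT @buffer;            # one big write instead of many small ones
            @buffer = ();
            $fuse   = $LINES_TO_BUFFER;
        }
    }

    print OUTPUT @buffer if @buffer;         # flush the partial buffer left at the end

    close DATAFILE;
    close OUTPUT;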