
This assumes that you don't need random access to the lines of the file, but are only using the array to process the lines, while still meeting your need to read and write the file as single operations. It also requires Perl 5.8.x or later (for in-memory filehandles).

The following loads, processes, and writes a 14MB, 1-million-line file in 3 seconds, consuming a total of 30MB (essentially 2x the file size).

For comparison, your original code, which builds the array and performs the same operation (m[ ]) on each line, consumes 146MB and takes 14 seconds.
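
For reference, the array-based pattern being compared against looks something like the following. This is a hypothetical reconstruction; your actual code isn't shown, so the filenames and loop structure are assumed:

    open IN, '<', $ARGV[0] or die "$ARGV[0] : $!";
    my @lines = <IN>;           ## every line becomes a separate scalar, plus array overhead
    close IN;

    for ( @lines ) {
        m[ ];                   ## the same per-line operation
    }

    open OUT, '>', 'test.txt' or die $!;
    print OUT @lines;
    close OUT;

The memory cost comes from holding the file once as a million individual scalars plus the array that indexes them, rather than once as a single string.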

    use Fcntl;                          ## needed for the O_* constants

    my $rec;
    sysopen( DF, $ARGV[0], O_RDONLY ) or die "$ARGV[0] : $!";
    sysread( DF, $rec, -s DF );         ## slurp the whole file into $rec
    close DF;

    open IN,  '<', \$rec    or die $!;  ## in-memory filehandle over the slurped data
    open OUT, '>', \my $out or die $!;  ## in-memory filehandle for the output

    seek OUT, length( $rec ) - 1, 0;    ## pre-size the output scalar by writing
    print OUT ' ';                      ## a byte at (roughly) its final length,
    seek OUT, 0, 0;                     ## then rewind to the start

    while( <IN> ) {
        ## Do stuff to this line in $_
        m[ ];
        print OUT;
    }

    sysopen( DF, 'test.txt', O_WRONLY | O_CREAT ) or die "test.txt : $!";
    syswrite DF, $out;
    close DF;

What the code does is open the scalar into which you slurped the file as an in-memory filehandle. It also opens, and pre-sizes, an output in-memory filehandle. You then read from the 'in' file and write to the 'out' file one line at a time in the normal way, and once you've finished, you write the out-file, which is really just a second huge scalar ($out), to the real output file in a single spew.
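
If the technique is unfamiliar, here is a minimal, self-contained sketch of in-memory filehandles on their own; the data and the per-line operation are made up purely for illustration:

    my $data = "line one\nline two\n";

    open IN,  '<', \$data      or die $!;   ## reads come from the scalar $data
    open OUT, '>', \my $result or die $!;   ## writes accumulate in the scalar $result

    while ( <IN> ) {
        print OUT uc;                       ## e.g. upper-case each line ($_)
    }

    close IN;
    close OUT;

    print $result;                          ## prints "LINE ONE\nLINE TWO\n"

Once opened this way, the scalars behave like ordinary files: readline, print, seek, and tell all work as usual, but everything stays in memory.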

Avoiding creating the array saves a substantial amount of both memory and time.

