in reply to Processing large files many times over

A couple of possibilities occur to me. First, depending on the amount of RAM you have and the other processes running, reading whole files into memory may consume enough RAM to start swapping, which will really slow things down. So it may be best to process one file at a time and read line by line, constructing your array as you go, rather than slurping the whole file into an array.
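As a minimal sketch of the line-by-line approach (the filename and the pattern here are just illustrative, not taken from your code):

```perl
use strict;
use warnings;

# Process one file at a time, reading line by line instead of
# slurping the entire file into an array first.
open my $fh, '<', 'data.txt' or die "Can't open data.txt: $!";
while ( my $curline = <$fh> ) {
    chomp $curline;
    next unless $curline =~ /([05])\s*$/;   # keep only lines ending in 0 or 5
    # ... build up your result array here, one line at a time ...
}
close $fh;
```

This way only one line is ever held in memory at a time, so memory use stays flat no matter how large the files get.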

Second, the first regexp could be written as
next unless $curline =~ /([05])\s*$/;
eliminating a reverse operation on each line.
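To make the comparison concrete (the "before" line is my guess at what your code does, since I haven't seen it):

```perl
# Before (assumed): reverse the line, then match at the front.
next unless reverse($curline) =~ /^\s*([05])/;

# After: anchor the match at the end of the line instead.
# Same effect, but no per-line reverse() call.
next unless $curline =~ /([05])\s*$/;
```

The `$` anchor lets the regex engine check the end of the string directly, so the extra string copy from `reverse` on every line goes away.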

-Mark

Replies are listed 'Best First'.
Re: Re: Processing large files many times over
by dimmesdale (Friar) on Jun 24, 2002 at 19:26 UTC
    I know that 'thank you' replies aren't much welcomed (I seem to remember a few nasty comments a while back), but I have to THANK YOU tremendously. It's the computer's RAM! I can't believe I didn't think of it (well, actually, it's not that hard to believe I could have missed it). The files are taking only 5 seconds each (thus far). You saved me from (more) countless headaches! It seemed the slower my programs were going, the more depressed I was getting.