in reply to Improving the efficiency of code when processed against large amount of data

Another thing I found that sped up my code on larger files was to avoid slurping and read line by line:
foreach (<FH>){ print $_; }
This should work.

Re^2: Improving the efficiency of code when processed against large amount of data
by chromatic (Archbishop) on Nov 09, 2006 at 08:16 UTC

    Only while loops avoid slurping, though! A foreach loop evaluates <FH> in list context, so it slurps the whole file into a list before iterating.
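
    A rough sketch of the difference (assuming FH is already open on a large file): the foreach form above reads every line into memory up front, while a while loop reads one line per iteration in roughly constant memory.

        # foreach (<FH>) { ... } reads <FH> in list context,
        # pulling the entire file into a list before looping.
        # while reads a single line per iteration instead:
        while (my $line = <FH>) {
            print $line;
        }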

Re^2: Improving the efficiency of code when processed against large amount of data
by aufflick (Deacon) on Nov 09, 2006 at 06:00 UTC
    Slurping all the files before processing means they are all in RAM. If you are light on RAM, that means you end up swapping the data in, then out, then back into memory again.

    Line by line (or at least file by file) is usually the way to go for large datasets.
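
    A minimal sketch of the file-by-file approach (the file names here are hypothetical): open, read, and close each file in turn, so at most one line of one file is held in memory at a time.

        use strict;
        use warnings;

        my @files = ('big1.log', 'big2.log');   # hypothetical input files

        for my $file (@files) {
            open my $fh, '<', $file or die "Can't open $file: $!";
            while (my $line = <$fh>) {
                # ... process $line here ...
            }
            close $fh or warn "Error closing $file: $!";
        }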

      Good explanation, thanks!