in reply to Re: Improving the efficiency of code when processed against large amount of data
in thread Improving the efficiency of code when processed against large amount of data

Slurping all the files before processing means they are all in RAM at once. If you are light on RAM, the system starts swapping that data out to disk and back in again, which can easily cost more time than the processing itself.

Line by line (or at least file by file) is usually the way to go for large datasets.
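A minimal sketch of the line-by-line approach (the sub name and the pattern are placeholders, not from the original post). Only one line is held in memory at a time, so memory use stays flat no matter how big the file is:

```perl
use strict;
use warnings;

# count_matches: scan a file line by line for a pattern, keeping
# only the current line in memory instead of slurping the whole file.
sub count_matches {
    my ($file, $re) = @_;
    open my $fh, '<', $file or die "Can't open $file: $!";
    my $count = 0;
    while ( my $line = <$fh> ) {    # reads one record per iteration
        $count++ if $line =~ $re;
    }
    close $fh;
    return $count;
}

# Usage (filename and regex are illustrative):
# print count_matches('big.log', qr/ERROR/), "\n";
```

Compare that with the slurping version, `my @lines = <$fh>;`, which pulls every line into an array before you touch the first one.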


Replies are listed 'Best First'.
Re^3: Improving the efficiency of code when processed against large amount of data
by zer (Deacon) on Nov 09, 2006 at 06:17 UTC
    Good explanation, thanks!