in reply to Re: Handling large amounts of data in a perl script
in thread Handling large amounts of data in a perl script

I'll be working with about 100,000 directories, each containing several subdirectories that each hold a few thousand files or fewer. The calculations themselves will be simple.

Re^3: Handling large amounts of data in a perl script
by ww (Archbishop) on Jan 08, 2014 at 19:47 UTC
    How is the large number of dirs and files relevant to "making each person an object" or to educated_foo's observation and question?
    Come, let us reason together: Spirit of the Monastery
Re^3: Handling large amounts of data in a perl script
by educated_foo (Vicar) on Jan 09, 2014 at 02:48 UTC
    Okay, so you're dealing with about 10^5 * 10^3 * 10 items, i.e. about a giga-item. On a sufficiently powerful machine, you can fit them all in memory at once if they're just integers (~4GB or ~8GB). If they're not, say if you make them "objects," they won't fit comfortably.
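    If memory does become an issue, packing the values into a single string is one way to stay close to that 4- or 8-byte-per-item figure. Below is a rough sketch of the idea; the 32-bit values, the 100_000-element test array, and the use of the CPAN module Devel::Size to show the footprint are illustrative assumptions, not part of the original problem.

    # Sketch: store counts as packed 32-bit integers rather than Perl scalars.
    use strict;
    use warnings;
    use Devel::Size qw(total_size);   # CPAN module, used here only to show sizes

    my @values = map { int rand 100 } 1 .. 100_000;

    my $packed = pack 'l*', @values;          # exactly 4 bytes per value
    printf "array of scalars: %d bytes\n", total_size(\@values);
    printf "packed string:    %d bytes\n", total_size(\$packed);

    # Individual values can still be read back without unpacking the whole string:
    my $i = 42;
    my ($v) = unpack 'l', substr($packed, $i * 4, 4);
    print "value $i is $v\n";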

    Now you have to ask yourself whether you need to process them all at once or sequentially. If you have to process them all at once (e.g. sorting), you'll have to do something clever. Otherwise (e.g. finding the mean), you can just run through them one at a time, updating some state in your program, e.g.

    my ($ages, $n) = (0, 0);
    while (defined(my $age = next_age())) {    # stop on undef, not on a legitimate age of 0
        $ages += $age;
        $n++;
    }
    print "average = ", $ages / $n, "\n";
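
    The same pattern scales to the files on disk: walk the tree and keep only the running totals in memory. Here is a minimal sketch using the core File::Find module; the top-level directory path and the assumption that each file holds one number per line are placeholders, since the actual layout wasn't described.

    use strict;
    use warnings;
    use File::Find;

    my ($sum, $n) = (0, 0);

    find(sub {
        return unless -f $_;                      # $_ is the current file's name
        open my $fh, '<', $_ or return;
        while (my $line = <$fh>) {
            chomp $line;
            next unless $line =~ /^-?\d+(?:\.\d+)?$/;   # keep only numeric lines
            $sum += $line;
            $n++;
        }
        close $fh;
    }, '/path/to/top/dir');                       # placeholder path

    print "average = ", $sum / $n, "\n" if $n;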