in reply to Writing to many (>1000) files at once
There's probably stuff I'm missing here, but your numbers don't add up.
5000 lines * 132 chars * 1000 files ≈ 630 MB. Your 16 GB should conservatively give you headroom for well over 20,000 reports of 5000 lines; or 1000 files of over 100,000 lines each; or some combination in between. That assumes there is no scope for 'sharing' lines between files whilst in memory.
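For example, the back-of-envelope arithmetic (my figures, not anything from your post):

```perl
# lines per file * chars per line * number of files
my $bytes = 5_000 * 132 * 1_000;
printf "%.0f MB\n", $bytes / 2**20;    # roughly 630 MB
```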
Perhaps the problem is that you are storing each line as a separate keyed value in a hash and that structure is consuming extra memory?
If your perl is built to use PerlIO, then you should (I think) be able to open as many RAM files as you want. As the lines in a RAM file are effectively concatenated into a single scalar, they carry less overhead than storing them in an array or hash.
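A rough sketch of what I mean (the names %ram_file and %ram_fh are made up for illustration; this assumes a perl 5.8+ built with PerlIO, which lets you open a filehandle on a scalar reference):

```perl
use strict;
use warnings;

my %ram_file;    # report name => scalar accumulating its contents
my %ram_fh;      # report name => filehandle writing into that scalar

for my $report ( map { "report_$_.txt" } 1 .. 1000 ) {
    # Open an in-memory "file": writes to the handle append to the scalar.
    open $ram_fh{$report}, '>', \$ram_file{$report}
        or die "Cannot open RAM file for $report: $!";
}

# During generation, print to the in-memory handle exactly as you would
# print to a real file:
print { $ram_fh{'report_1.txt'} } "a 132-character report line...\n";
```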
You could then spew (the opposite of slurp) them out to the filesystem one file at a time at the end. That should be more filesystem-cache friendly, and less tiresome to code, than juggling thousands of files through 250 filehandles.
It would make the generation phase very fast. Even the writing should be comparatively quick, as you would only be asking the filesystem to allocate each file's final space once, rather than constantly reallocating the sizes of many files in rotation.
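The spew phase at the end might look something like this (again just a sketch, reusing the made-up %ram_file/%ram_fh names from above):

```perl
for my $report ( sort keys %ram_file ) {
    close $ram_fh{$report};    # the data stays in $ram_file{$report}

    # Write the whole report in one go, one real file at a time.
    open my $out, '>', $report
        or die "Cannot write $report: $!";
    print {$out} $ram_file{$report};
    close $out or die "Error closing $report: $!";
}
```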
Re^2: Writing to many (>1000) files at once
by rminner (Chaplain) on Aug 15, 2006 at 10:57 UTC