in reply to Writing to many (>1000) files at once

There's probably stuff I'm missing here, but your numbers don't add up.

5000 lines * 132 chars * 1000 files = roughly 630 MB. Your 16 GB should conservatively give you headroom for 20,000 reports of 5000 lines; or 1000 files of 100,000 lines; or some other combination in between, assuming there is no scope for 'sharing' lines between files whilst in memory.

Perhaps the problem is that you are storing each line as a separate keyed value in a hash, and it is the per-element overhead of that structure that is consuming the extra memory?

If your perl is built to use PerlIO, then you should (I think) be able to open as many RAM files (in-memory files) as you want. Because the lines written to a RAM file are effectively concatenated into a single scalar, they carry far less overhead than storing them in an array or hash.
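A minimal sketch of that approach (the report names and the $report/$line variables are stand-ins for whatever drives your generation loop):

    my ( %buf, %fh );
    for my $report (@report_names) {
        # open an in-memory file whose 'contents' live in $buf{$report}
        open $fh{$report}, '>', \$buf{$report}
            or die "Can't open in-memory file for $report: $!";
    }

    # during generation, print each line to whichever report it belongs to
    print { $fh{$report} } $line, "\n";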

You could then spew (the opposite of slurp) them out to the filesystem one file at a time at the end. That should be more filesystem-cache friendly, and less tiresome to code, than juggling thousands of files through 250 filehandles.

It would make the generation phase very fast. Even the writing should be comparatively quicker, as you would only be asking the filesystem to allocate each file's final space once, rather than constantly reallocating the sizes of many files in rotation.
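The spew phase then becomes one print per file. Assuming the %fh and %buf hashes from the sketch above and made-up output filenames:

    for my $report ( keys %fh ) {
        close $fh{$report};                 # done writing to the RAM file
        open my $out, '>', "$report.txt"
            or die "Can't write $report.txt: $!";
        print {$out} $buf{$report};         # the whole report in one write
        close $out
            or die "Error closing $report.txt: $!";
    }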



Replies are listed 'Best First'.
Re^2: Writing to many (>1000) files at once
by rminner (Chaplain) on Aug 15, 2006 at 10:57 UTC
    I agree: if you can do it in memory, then do it in memory. Also, the standard Perl I/O mechanisms are incredibly slow. I would recommend File::Slurp (it supports both slurping and spewing). I used it once in a program that had to modify roughly 1000 files. Compared to standard slurping like:
    { local $/ = undef; my $wholefile = <$FH>; }
    it was 15 times faster (it now takes roughly 3 minutes, while with standard Perl mechanisms like the one above it took 45 minutes). It might even be that both the input and output data fit into main memory - if they do, and if that memory consumption isn't a problem, then simply do it: it's much faster.
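    For example, a minimal sketch using File::Slurp's read_file and write_file (the filename is made up):

        use File::Slurp qw(read_file write_file);

        my $whole = read_file('report_0001.txt');    # slurp the file in one call
        # ... modify $whole here ...
        write_file('report_0001.txt', $whole);       # spew it back in one call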