in reply to Writing to many (>1000) files at once

How big are the individual files? Are the files unique? Do the files all reside on the same media?

Multiple processors probably won't help much, because getting the data out to the disk drives is likely the bottleneck. There's not much point in having more file handles open than the system can physically write to simultaneously.
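For what it's worth, one common pattern for staying under the handle limit is a small pool of handles opened in append mode, closing the oldest whenever the pool fills. A minimal sketch, not from the original post: the pool size, the tab-separated input, and fh_for are all illustrative:

    use strict;
    use warnings;

    my $MAX_OPEN = 200;   # illustrative cap, well under typical OS limits
    my %fh;               # filename => open filehandle
    my @order;            # filenames in the order they were opened

    sub fh_for {
        my ($file) = @_;
        return $fh{$file} if exists $fh{$file};
        if (@order >= $MAX_OPEN) {    # pool full: close the oldest handle
            my $old = shift @order;
            close delete $fh{$old} or die "close $old: $!";
        }
        open my $h, '>>', $file or die "open $file: $!";
        push @order, $file;
        return $fh{$file} = $h;
    }

    # illustrative input: one "filename<TAB>record" line per record
    while (my $line = <STDIN>) {
        chomp $line;
        my ($file, $data) = split /\t/, $line, 2;
        next unless defined $data;
        print { fh_for($file) } "$data\n";
    }

Append mode is what makes the eviction safe: a handle that gets closed and reopened later just picks up where it left off.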


DWIM is Perl's answer to Gödel

Re^2: Writing to many (>1000) files at once
by suaveant (Parson) on Aug 15, 2006 at 03:05 UTC
    It is financial data... each file is unique, and even lines that share the same key may pull different items from the record, though the output is all based on the same underlying data.

    Files can be anything from a few bytes to a couple of megs, really... it all depends on how many securities they ask for.

                    - Ant
                    - Some of my best work - (1 2 3)

      It doesn't sound like writing more than one file at a time gains anything from an I/O efficiency point of view, then. You may get a gain in code organisation, but there are probably other ways to achieve that (one possibility is sketched below). Can you sketch the code structure and the layout of the files on disk? (Not file contents, just how the directories hang together and such.)

      How do the files get out to your customers? Is that the potential bottleneck?
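      If the whole run's output fits in memory, the simplest organisation may be to buffer per file and then write each file in one sequential pass, one handle at a time. A rough sketch, assuming records get accumulated into a hash keyed by output filename (the names here are hypothetical):

          use strict;
          use warnings;

          my %buffer;   # output filename => accumulated text

          # ... one pass over the source data, appending each
          #     record to $buffer{$target_file} ...

          # then one sequential pass to write everything out
          for my $file (sort keys %buffer) {
              open my $out, '>', $file or die "open $file: $!";
              print $out $buffer{$file};
              close $out or die "close $file: $!";
          }

      With over a thousand files at up to a couple of megs each, that buffer could run to gigabytes, so this only applies if the totals stay reasonable; otherwise a bounded pool of append-mode handles (as sketched earlier in the thread) is the fallback.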


      DWIM is Perl's answer to Gödel