in reply to Re^3: Algorithm advice sought for seaching through GB's of text (email) files
in thread Algorithm advice sought for seaching through GB's of text (email) files

(The disk spins at the same speed no matter how many readers are waiting for their sector to come under the disk head.)

That's way too simplistic.

The rotation of the disk is not the limiting factor; head movement is. With one process moving serially through 300,000 files, successive reads by a single process can usually continue from the same track. With multiple processes reading from different files, each context switch (overhead) also incurs the additional, much more expensive overhead of a head seek.

Additionally, with small files (emails), when the process calls for a 512-byte or 4k read, the device driver will likely read the entire file (if it happens to be contiguous on a track) and store it in cache. However, the more processes reading new data from disk, the greater the likelihood that the cached (but unread) data for one process/file will need to be discarded to accommodate reads by another process after a context switch, with the result that some data will need to be read multiple times.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^5: Algorithm advice sought for seaching through GB's of text (email) files
by tilly (Archbishop) on Sep 24, 2006 at 20:19 UTC
    At some point you are right: contention becomes an issue. However, I have found as a practical matter on commodity hardware that that point isn't reached until you have 5-10 parallel processes reading.

    Of course, I last looked into this a while ago. The figure is undoubtedly different today. Benchmark for your purposes.

    But my point remains: I/O-bound jobs are excellent candidates for parallelization. Due to various sources of latency, one process generally comes nowhere close to maxing out I/O. If you don't have to worry about getting in other people's way, it almost always makes sense to run multiple parallel processes.
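    The "benchmark for your purposes" advice above can be acted on directly. Below is a minimal, self-contained sketch (not from the thread; the search pattern and corpus are invented for illustration) that times a serial scan of many small files against a multiprocessing pool, which is one way to see where parallel readers stop helping on a given disk:

    ```python
    import multiprocessing
    import tempfile
    import time
    from pathlib import Path

    PATTERN = b"ERROR"  # hypothetical search term

    def count_matches(path):
        """Count lines containing PATTERN in one file; each call is I/O bound."""
        with open(path, "rb") as fh:
            return sum(PATTERN in line for line in fh)

    def scan_serial(paths):
        # One process reads files back to back: mostly sequential head movement.
        return sum(count_matches(p) for p in paths)

    def scan_parallel(paths, workers=4):
        # Several processes read concurrently; past some worker count the
        # disk head starts seeking between their files.
        with multiprocessing.Pool(workers) as pool:
            return sum(pool.map(count_matches, paths))

    if __name__ == "__main__":
        # Build a throwaway corpus so the sketch is runnable as-is.
        tmp = tempfile.TemporaryDirectory()
        for i in range(200):
            (Path(tmp.name) / f"msg{i}.txt").write_bytes(b"ok\nERROR here\nok\n")
        paths = sorted(Path(tmp.name).glob("*.txt"))

        for name, scan in (("serial", scan_serial), ("parallel", scan_parallel)):
            t0 = time.perf_counter()
            total = scan(paths)
            print(f"{name}: {total} matches in {time.perf_counter() - t0:.3f}s")
    ```

    On a real mailstore the crossover point will depend on the disk, the filesystem cache, and file layout, so vary `workers` and measure rather than trusting any fixed figure.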

      Due to various sources of latency, one process generally comes nowhere close to maxing out I/O

      Hmm. See threads and multiple filehandles for the most recent example where this has been shown not to be the case.


        Huh. I have never personally seen anything like that happen, and a few years ago I was doing a lot of I/O-bound processing. The things that might have been different are that I was working with lots of small files and generally had to be both reading and writing; also, this was a few years ago, so the hardware and operating systems were different.

        I don't know what made the difference, but apparently it is possible for one process to max out I/O. However, I still haven't personally seen it happen.