(The disk spins at the same speed no matter how many readers are waiting for their sector to come under the disk head.)
That's way too simplistic.
The rotation of the disk is not the limiting factor; head movement is. With one process moving serially through 300,000 files, successive reads by that single process can reasonably expect to continue reading from the same track. With multiple processes reading from different files, each context switch (overhead in itself) also incurs the additional, far more expensive overhead of a head move.
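If you want to see the effect on your own data, here's a rough sketch that compares reading a set of files one at a time against interleaving 4 KB reads across all of them at once. The 'mail/*' glob is an assumption (point it at your own mail directory), and the numbers only mean anything on a cold cache; on a warm cache both passes are served from RAM.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(time);

    my @files = glob 'mail/*';              # assumed directory of mail files
    die "no files found\n" unless @files;
    splice @files, 200 if @files > 200;     # stay well under the open-files limit

    # Pass 1: one "process", one file at a time, each read to EOF.
    my $t0 = time;
    for my $f (@files) {
        open my $fh, '<', $f or next;
        1 while read $fh, my $buf, 4096;
        close $fh;
    }
    printf "sequential:  %.3fs\n", time - $t0;

    # Pass 2: simulate interleaved readers with round-robin 4 KB reads.
    my @fhs;
    for my $f (@files) {
        open my $fh, '<', $f or next;
        push @fhs, $fh;
    }
    my $t1 = time;
    while (@fhs) {
        @fhs = grep { read $_, my $buf, 4096 } @fhs;   # drop handles at EOF
    }
    printf "interleaved: %.3fs\n", time - $t1;

On a physical disk the interleaved pass forces the head to hop between files, which is exactly the cost the serial reader avoids.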
Additionally, with small files (emails), when the process requests a 512-byte or 4 KB read, the device driver will likely read the entire file (if it happens to be contiguous on a track) and store it in cache. However, the more processes reading new data from disk, the greater the likelihood that cached (but not yet read) data for one process/file will have to be discarded to accommodate reads by another process after a context switch, with the result that some data must be read from disk multiple times.
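You can get a feel for that read-ahead with a sketch like this: time the first small read (which pays the seek and rotational delay) against reading the rest of the file (which, if read-ahead has done its job, comes back from cache almost for free). Again this is only meaningful with a cold cache, and the file name is whatever you pass on the command line.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(time);

    my $file = shift @ARGV or die "usage: $0 <mail file>\n";
    open my $fh, '<', $file or die "open $file: $!";

    my $t0 = time;
    read $fh, my $first, 512;           # first read: seek + rotational delay
    printf "first 512 bytes: %.6fs\n", time - $t0;

    my $t1 = time;
    1 while read $fh, my $buf, 4096;    # remainder: likely served from cache
    printf "remainder:       %.6fs\n", time - $t1;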
In reply to Re^4: Algorithm advice sought for seaching through GB's of text (email) files by BrowserUk
in thread Algorithm advice sought for seaching through GB's of text (email) files by chargrill