in reply to Re: Memory Efficient Alternatives to Hash of Array
in thread Memory Efficient Alternatives to Hash of Array
Consider 5 GB of data broken up into 50-byte lines; that's 100 million lines of data. Suppose we want to store that data in DBM::Deep and then retrieve it. For the sake of argument, let's say that each store or retrieve takes one seek to disk. That's 200 million seeks to disk, as in the sketch below.
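Here is a minimal sketch of the access pattern being estimated: a disk-backed hash of arrays. The file name and the tab-delimited record format are assumptions for illustration, not anything from the original post.

```perl
use strict;
use warnings;
use DBM::Deep;

# Hypothetical file name; DBM::Deep stores the structure on disk.
my $db = DBM::Deep->new("records.db");

# Store: every push touches the on-disk structure (seeks).
while (my $line = <STDIN>) {
    chomp $line;
    my ($key, $value) = split /\t/, $line, 2;   # assumed tab-delimited
    push @{ $db->{$key} }, $value;
}

# Retrieve: walking the structure again means more seeks.
for my $key (keys %$db) {
    my @values = @{ $db->{$key} };
    # ... process @values ...
}
```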
How long do 200 million seeks to disk take? Well, suppose that your disk spins at 6000 rpm. (Real disks are typically 5400-7200 rpm; 6000 keeps the arithmetic round.) That means it spins 100 times per second, so the rotational latency alone on a seek is between 0 and 0.01 seconds, or 0.005 seconds on average. 200 million seeks therefore take a million seconds. That's 11.57 days, or about a week and a half.
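The arithmetic above, spelled out as a runnable sketch (the numbers are the post's assumptions, not measurements):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $lines    = 5e9 / 50;        # 5 GB in 50-byte lines = 100 million lines
my $seeks    = 2 * $lines;      # one seek to store + one to retrieve each
my $rps      = 6000 / 60;       # 6000 rpm = 100 revolutions per second
my $avg_seek = (1 / $rps) / 2;  # average rotational latency: half a turn
my $seconds  = $seeks * $avg_seek;

printf "%.0f seeks x %.3fs = %.0f seconds = %.2f days\n",
    $seeks, $avg_seek, $seconds, $seconds / 86400;
# prints: 200000000 seeks x 0.005s = 1000000 seconds = 11.57 days
```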
Now how long does sorting that data take? Let's assume an absurdly slow disk: 10 MB/s. Suppose we code up a merge sort that needs 30 passes over the data. (Real sorting algorithms do several passes' worth of work in RAM and so need fewer passes to disk.) Each pass needs to read and write 5 GB, so we move 300 GB total, which at 10 MB/s takes 30,000 seconds, or a bit over 8 hours. (If your machine really takes this long to sort this much data, you should upgrade to a machine from this millennium.)
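The same estimate as a sketch, using the post's deliberately pessimistic assumptions:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $data_gb    = 5;
my $passes     = 30;                       # pessimistic merge-sort pass count
my $gb_moved   = $passes * 2 * $data_gb;   # each pass reads and writes 5 GB
my $mb_per_sec = 10;                       # absurdly slow disk
my $seconds    = $gb_moved * 1000 / $mb_per_sec;

printf "%d GB moved at %d MB/s = %.0f seconds = %.1f hours\n",
    $gb_moved, $mb_per_sec, $seconds, $seconds / 3600;
# prints: 300 GB moved at 10 MB/s = 30000 seconds = 8.3 hours
```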
The moral? Hard drives are not like RAM. DBM::Deep and friends are efficient for programmers, but not for performance. If you have existing complex code that needs to scale, they may be worth using. But where possible, it is worth some programmer effort to stay away from them.
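One way to act on that moral, sketched under the same assumed tab-delimited format as above (the external `sort` replaces the random-access structure with sequential passes; `process` is a hypothetical stand-in for whatever you would have done with each array):

```perl
use strict;
use warnings;

# Sort the file by key, then stream through it grouping
# consecutive lines that share a key: sequential I/O only.
open my $sorted, '-|', 'sort', '-t', "\t", '-k1,1', 'records.txt'
    or die "can't run sort: $!";

my ($current_key, @values);
while (my $line = <$sorted>) {
    chomp $line;
    my ($key, $value) = split /\t/, $line, 2;
    if (defined $current_key and $key ne $current_key) {
        process($current_key, \@values);   # one complete group at a time
        @values = ();
    }
    $current_key = $key;
    push @values, $value;
}
process($current_key, \@values) if defined $current_key;

sub process {
    my ($key, $values) = @_;
    # ... whatever you would have done with the hash-of-array entry ...
}
```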