in reply to Re^2: Memory Efficient Alternatives to Hash of Array
in thread Memory Efficient Alternatives to Hash of Array

That's a pretty detailed explanation.

But I am not very sure about the conclusion you have stated.

Based on your comment, it seems sorting (whatever the size of the dataset) is going to take much less time than other methods such as DBM::Deep.

So, what exactly is the demarcating line between when to use 'sorting' and when to use DBM::Deep (for example)?

Would you mind elaborating on that? Thanks

Re^4: Memory Efficient Alternatives to Hash of Array
by tilly (Archbishop) on Dec 27, 2008 at 18:24 UTC
    The conclusion is correct. If you have a large data set living on disk, sorting is orders of magnitude more efficient. Furthermore, on most commodity hardware you can't use DBM::Deep for a dataset of this size, because DBM::Deep is limited to a 4 GB file size unless you are using a 64-bit Perl and you turn on the right options. But there are still many use cases for DBM::Deep.
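    To make the sorting idea concrete, here is a minimal core-Perl sketch (my own illustration, not code from the thread): instead of building a hash of arrays in RAM, you sort the records by key so all values for a key sit next to each other, then stream through the groups one key at a time. For a file genuinely too big for memory you would sort externally (e.g. the system `sort`), but the grouping loop is the same.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy records: (key, value) pairs, deliberately out of order.
my @records = (
    [ 'b', 2 ],
    [ 'a', 1 ],
    [ 'b', 3 ],
    [ 'a', 4 ],
);

# Sort by key so every key's values are adjacent.
my @sorted = sort { $a->[0] cmp $b->[0] } @records;

# Stream through the sorted records, holding only ONE key's
# values in memory at a time -- never the whole hash of arrays.
my ( $current_key, @values );
for my $rec ( @sorted, [ undef, undef ] ) {    # sentinel flushes the last group
    my ( $key, $value ) = @$rec;
    if ( defined $current_key and ( !defined $key or $key ne $current_key ) ) {
        print "$current_key: @values\n";       # process one complete group
        @values = ();
    }
    $current_key = $key;
    push @values, $value if defined $key;
}
```

    This prints `a: 1 4` and then `b: 2 3`, and its memory footprint is bounded by the largest single group rather than the whole dataset.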

    The most important is when you have existing code and a data set that is just a little too big to handle in RAM. You don't want to rewrite your code, so you use DBM::Deep, and it will work, if slowly.
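    The appeal of that drop-in approach is that a tied hash keeps the plain-hash syntax while the data lives on disk. As a runnable sketch I'm using core SDBM_File here (DBM::Deep works the same way but additionally handles nested structures like a hash of arrays directly, e.g. `push @{ $db->{$key} }, $value`); the filename `demo_db` is hypothetical.

```perl
use strict;
use warnings;
use Fcntl;
use SDBM_File;

# Tie a hash to a disk file: reads and writes now go to disk,
# so the data set no longer has to fit in RAM.
tie my %on_disk, 'SDBM_File', 'demo_db', O_RDWR | O_CREAT, 0666
    or die "tie failed: $!";

# SDBM stores flat strings, so a list of values is joined here;
# DBM::Deep would store the nested array for you transparently.
$on_disk{apple} = join "\0", 1, 2, 3;
my @values = split /\0/, $on_disk{apple};

untie %on_disk;
```

    The calling code barely changes: everywhere you wrote `$hash{$key}` keeps working, which is exactly why this is the low-effort escape hatch when RAM runs out.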

    A second case is when you have a pre-built data structure that you need to access. For instance, you have a local index that you look things up in when serving a web page. Sure, building it is slow. But a typical web request is going to do just a lookup, which will be plenty fast. As long as you are grabbing a small amount of data each time, it will be quick.
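    That lookup pattern can be sketched like this (again with core SDBM_File standing in for DBM::Deep, and `page_index` as a hypothetical filename): the expensive build happens once, and each request re-ties the file read-only and fetches a single key, touching only a few disk pages rather than loading the whole index.

```perl
use strict;
use warnings;
use Fcntl;
use SDBM_File;

# One-time build step (the slow part, done offline).
tie my %build, 'SDBM_File', 'page_index', O_RDWR | O_CREAT, 0666
    or die "tie failed: $!";
$build{'/about'} = 'About page metadata';
untie %build;

# Per-request step: open read-only and fetch exactly one key.
tie my %index, 'SDBM_File', 'page_index', O_RDONLY, 0666
    or die "tie failed: $!";
my $meta = $index{'/about'};
untie %index;
```

    Because each lookup reads only the pages holding that key, the per-request cost stays small no matter how large the index file grows.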

    But as cool as DBM::Deep is, it is constrained by the physical limits of your machine, and you sometimes need to be aware of them.