in reply to Managing a graph with large number of nodes

What does a node look like?

If performance is important, before paying the penalties of constructing, updating and traversing a disk-based graph, consider using a more memory-efficient data structure.

If you store 10 million of the following simple nodes as:

  1. Array of hashes:
    >perl -E"$a[$_] = {a=>1,b=>3.2,c=>'a short string'} for 1..10e6; <>"

    It requires ~3.9 GB.

  2. Array of arrays:
    >perl -E"$a[$_] = [1,3.2,'a short string'] for 1..10e6; <>"

    ~2.8 GB.

  3. Array of simple (or packed) strings:

    >perl -E" $a[$_] = qq[1,3.2,'a short string'] for 1..10e6; <>"
    >perl -E" $a[$_] = pack 'VdA8', 1,3.2,'a short string' for 1..10e6; <>"

    Only 1.1 GB.
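For what it's worth, the pack/unpack round trip for one such record looks like this. (Note: the 'A8' template above keeps only the first 8 bytes of the string; the 'A14' field below is my choice, not the original's, so that all of 'a short string' survives the round trip.)

    use strict;
    use warnings;

    # Pack one node into a fixed-width binary string:
    # V = unsigned 32-bit integer, d = double, A14 = 14-byte space-padded string.
    my $rec = pack 'VdA14', 1, 3.2, 'a short string';
    printf "packed record is %d bytes\n", length $rec;    # 26 bytes

    # Unpack it again; the 'A' template strips any trailing padding.
    my( $int, $dbl, $str ) = unpack 'VdA14', $rec;
    print "$int $dbl $str\n";    # 1 3.2 a short string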

By saving 75% of the memory requirement, you may be able to fit all your data into RAM comfortably.

This means that you need to split (or unpack) and join (or repack) each time you access or modify a node, but that will be far faster than doing disk I/O. You can easily hide the details of the splitting and joining behind a tied array or an OO class. And you can even add a simple caching mechanism to avoid re-splitting in a read-update sequence.
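As a rough sketch of that approach (the class name, the 'VdA14' layout and the one-slot cache policy here are all mine, purely illustrative):

    package PackedNodes;
    # Sketch: a tied array that stores each node as a packed string,
    # unpacks on FETCH, repacks on STORE, and keeps a one-slot cache
    # so a read-update sequence on the same index doesn't re-unpack.
    use strict;
    use warnings;
    use Tie::Array;
    our @ISA = 'Tie::Array';

    my $FMT = 'VdA14';    # uint32, double, 14-byte space-padded string

    sub TIEARRAY  { bless { data => [], idx => -1, cache => undef }, shift }
    sub FETCHSIZE { scalar @{ $_[0]{data} } }
    sub STORESIZE { $#{ $_[0]{data} } = $_[1] - 1 }

    sub FETCH {
        my( $self, $i ) = @_;
        # One-slot cache: skip the unpack if this index was fetched last.
        return $self->{cache} if $i == $self->{idx};
        $self->{idx} = $i;
        return $self->{cache} = [ unpack $FMT, $self->{data}[ $i ] ];
    }

    sub STORE {
        my( $self, $i, $node ) = @_;    # $node = [ $int, $double, $string ]
        $self->{idx} = -1;              # invalidate the cache
        $self->{data}[ $i ] = pack $FMT, @$node;
    }

    package main;
    use strict;
    use warnings;

    tie my @nodes, 'PackedNodes';
    $nodes[ 0 ] = [ 1, 3.2, 'a short string' ];
    my( $id, $weight, $label ) = @{ $nodes[ 0 ] };
    print "$id $weight $label\n";    # prints: 1 3.2 a short string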

