DrDoogie has asked for the wisdom of the Perl Monks concerning the following question:

Hi all! I have a large data structure (a HoH) with roughly 10K outer keys, each holding about 400 keys. Currently it takes several minutes to assign this into the shared cache; is there some way to optimise it? I can give specific code examples, but I'd just like to "test the waters" with a general question first. I'm under the impression that IPC::SharedCache is the only way to go when you need to share data between processes.
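Roughly, the pattern in question looks like this (a minimal sketch rather than my real code; the ipc_key, key names, and build_inner_hash are placeholders, and the load_callback/validate_callback setup follows the IPC::SharedCache docs):

    use strict;
    use IPC::SharedCache;

    # Stand-in for however the real data gets built: ~400 keys per record.
    sub build_inner_hash {
        my $key = shift;
        return map { ( "field$_" => "$key-$_" ) } 1 .. 400;
    }

    # Called on a cache miss: return one record as a hash reference,
    # which IPC::SharedCache serializes into shared memory.
    sub load_record {
        my $key = shift;
        my %inner = build_inner_hash($key);
        return \%inner;
    }

    # Called on a cache hit; always trust the cached copy in this sketch.
    sub validate_record { return 1 }

    tie my %cache, 'IPC::SharedCache',
        ipc_key           => 'HOHC',
        load_callback     => \&load_record,
        validate_callback => \&validate_record;

    # Every FETCH copies a whole serialized record out of shared memory,
    # which adds up with ~10K outer keys of ~400 fields each.
    my $record = $cache{'some_outer_key'};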

Replies are listed 'Best First'.
Re: Optimizing assignment into IPC::SharedCache?
by perrin (Chancellor) on Feb 05, 2004 at 03:10 UTC
    IPC::SharedCache is actually one of the slowest modules of its kind. A MySQL database is many times faster, and BerkeleyDB, IPC::MM, and Cache::FastMmap are all faster than that.
      And then MySQL can use BerkeleyDB so you get the benefits of both!
        Well, no. MySQL using BerkeleyDB is much slower than using BerkeleyDB directly.
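    As a rough illustration of the Cache::FastMmap route mentioned above (a minimal sketch, not benchmarked; the share_file path, cache_size, and sample data are made up):

        use strict;
        use warnings;
        use Cache::FastMmap;

        # One mmap'ed file shared by every process on the box.
        my $cache = Cache::FastMmap->new(
            share_file => '/tmp/hoh_cache.fmm',
            cache_size => '64m',   # must be big enough for the serialized records
        );

        # Store each inner hash under its outer key; references are serialized
        # with Storable on set() and thawed again on get().
        my %hoh = (
            user1 => { name => 'foo', visits => 3 },
            user2 => { name => 'bar', visits => 7 },
        );
        $cache->set( $_ => $hoh{$_} ) for keys %hoh;

        # Any other process pointing at the same share_file sees the same data.
        my $record = $cache->get('user1');
        print $record->{name}, "\n";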
Re: Optimizing assignment into IPC::SharedCache?
by Fletch (Bishop) on Feb 05, 2004 at 01:20 UTC

    TMTOWTDI. If you've got that much data, using an RDBMS or a DB file might be much more efficient than copying that much data in and out of shared memory. Also look into Cache::Cache.
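    To make the DB-file idea concrete (a minimal sketch; the file path and sample data are made up, and MLDBM layered over DB_File with Storable is one way to get nested hashes into a DB file):

        use strict;
        use warnings;
        use Fcntl;                        # O_CREAT, O_RDWR
        use MLDBM qw(DB_File Storable);   # DB_File storage, Storable serialization

        # Tie a hash to a DB file on disk; inner hashes are frozen/thawed for you.
        tie my %hoh, 'MLDBM', '/tmp/hoh.db', O_CREAT | O_RDWR, 0644
            or die "Cannot tie /tmp/hoh.db: $!";

        # MLDBM caveat: assign a whole inner hash at once, since changes made
        # through an intermediate reference are not written back to the file.
        $hoh{user1} = { name => 'foo', visits => 3 };

        my $record = $hoh{user1};         # thawed copy from the DB file
        print $record->{name}, "\n";

        untie %hoh;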