dbritt has asked for the wisdom of the Perl Monks concerning the following question:

Hi All,

This one has been bugging me for a while now.

I have a hash with about 50,000 entries that I write to a SQL database periodically. I can then run SQL queries on the database to return information such as the number of unique entries, various totals, and so on. Basically, the SQL interface is very powerful and lets me 'slice' the data in numerous different ways.

The problem is that it is also very slow (I need to run up to 300 queries every 5 minutes). Given that I already have the hash stored in memory in my application, is there a simple way to run the queries directly against the hash rather than loading up the database?

I suppose what I am after is a DBI interface to good old Perl hashes. Any ideas?

Thanks,

Dave.

Re: Querying a Hash like a database
by Errto (Vicar) on Sep 05, 2005 at 05:52 UTC
    DBD::AnyData supports this. I haven't used it, but it appears to be as simple as passing it a reference to an array of rows (look at the section on "in-memory tables"). I've seen a number of Monks recommend this module and there is a PM review of it online as well.
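    Untested, but based on the DBD::AnyData documentation, loading existing Perl data as an in-memory table might look roughly like this (the table name, columns, and rows below are invented for illustration):

        use strict;
        use warnings;
        use DBI;

        # First row holds the column names, remaining rows hold the data.
        my $rows = [
            [ 'host',  'hits' ],
            [ 'alpha', 120    ],
            [ 'beta',  45     ],
            [ 'alpha', 80     ],
        ];

        my $dbh = DBI->connect( 'dbi:AnyData:', undef, undef, { RaiseError => 1 } );

        # Import the array as an in-memory table called 'stats'.
        $dbh->func( 'stats', 'ARRAY', $rows, 'ad_import' );

        my $sth = $dbh->prepare('SELECT host, hits FROM stats WHERE hits > 100');
        $sth->execute;
        while ( my ( $host, $hits ) = $sth->fetchrow_array ) {
            print "$host: $hits\n";
        }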
Re: Querying a Hash like a database
by TedPride (Priest) on Sep 05, 2005 at 05:59 UTC
    A query per second on average should be ridiculously easy with a data set of only 50,000 records. Are you using indexes for the high-volume, low-return selects?
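    If the slow queries filter or group on particular columns, creating an index on those columns from DBI is a one-liner. A rough sketch (the driver, table, column, and credentials are placeholders):

        use strict;
        use warnings;
        use DBI;

        # Placeholder connection; substitute the real driver and credentials.
        my $dbh = DBI->connect( 'dbi:mysql:dbname=stats_db', 'user', 'password',
            { RaiseError => 1 } );

        # Index the column used in the frequent WHERE clauses so the
        # high-volume, low-return selects can avoid full table scans.
        $dbh->do('CREATE INDEX idx_stats_host ON stats (host)');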
Re: Querying a Hash like a database
by calin (Deacon) on Sep 05, 2005 at 09:28 UTC

    Note that a hash doesn't have the built-in query optimisations of a real database engine, so even if you manage to find a SQL wrapper that supports advanced SQL features - joins, aggregates, advanced indexing (beyond "exact match"), range queries and so on - that doesn't mean it will run fast.
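    For simple counts and totals, a plain loop over the hash is often fast enough and needs no SQL at all. A rough sketch, assuming each hash value is a hashref of fields (the field names here are invented):

        use strict;
        use warnings;

        # %data is assumed to look like: key => { host => ..., bytes => ... }
        my %data = (
            req1 => { host => 'alpha', bytes => 512 },
            req2 => { host => 'beta',  bytes => 128 },
            req3 => { host => 'alpha', bytes => 256 },
        );

        # Rough equivalent of SELECT host, COUNT(*), SUM(bytes) ... GROUP BY host
        my ( %count, %total );
        for my $rec ( values %data ) {
            $count{ $rec->{host} }++;
            $total{ $rec->{host} } += $rec->{bytes};
        }

        for my $host ( sort keys %count ) {
            print "$host: $count{$host} entries, $total{$host} bytes\n";
        }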

    Maybe you should take a look at DBD::SQLite.
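    One relevant feature is that DBD::SQLite can keep the whole database in memory, which keeps the SQL interface but drops the disk overhead. A minimal sketch (the schema and data are invented for illustration):

        use strict;
        use warnings;
        use DBI;

        # ':memory:' keeps the database entirely in RAM for the life of the handle.
        my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
            { RaiseError => 1 } );

        $dbh->do('CREATE TABLE stats (host TEXT, hits INTEGER)');
        $dbh->do('CREATE INDEX idx_host ON stats (host)');

        my $sth = $dbh->prepare('INSERT INTO stats (host, hits) VALUES (?, ?)');
        $sth->execute( 'alpha', 120 );
        $sth->execute( 'beta',  45 );

        my ($total) = $dbh->selectrow_array(
            'SELECT SUM(hits) FROM stats WHERE host = ?', undef, 'alpha' );
        print "alpha: $total hits\n";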

    Update: I second TedPride's comment - you should review your indexes. Many database performance problems are due to improper indexing.