in reply to speeding up row by row lookup in a large db

If you were doing INSERTs or UPDATEs, I'd tell you to use transactions rather than auto-commit, but I only see SELECTs. Another option, since you have tons of RAM, is to load the whole thing into hashes in memory and do all the lookups in Perl. If you don't want to do that, I'd suggest switching to MySQL. SQLite is incredibly convenient, but it's not as fast.
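
Something along these lines, as an untested sketch (the file name, table, and column names below are placeholders for your actual schema):

    use strict;
    use warnings;
    use DBI;

    # Connect to the SQLite file; 'data.db', 'mytable', and the column
    # names are made up here; substitute your own schema.
    my $dbh = DBI->connect('dbi:SQLite:dbname=data.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    # Slurp the whole lookup table into a plain Perl hash keyed on id.
    my %lookup;
    my $sth = $dbh->prepare('SELECT id, val FROM mytable');
    $sth->execute;
    while (my ($id, $val) = $sth->fetchrow_array) {
        $lookup{$id} = $val;
    }
    $dbh->disconnect;

    # Every later lookup is just a hash access, with no SQL round trip:
    # my $val = $lookup{12345};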

Re^2: speeding up row by row lookup in a large db
by punkish (Priest) on Mar 21, 2009 at 18:22 UTC
    How do I load the entire data set into memory? The denormalized data set would be a few hundred GB (at least, that's what the BLOBs approach suggests), and isn't Perl (or any process on a 32-bit computer) restricted to addressing 2 GB of RAM? I don't really understand that very well, so if you have a strategy I can use, I am all ears.

    Many thanks,

    Update: Adding this note to explain more in response to perrin's note. In my original post above I showed only the SELECTs and mentioned that each SELECT takes about 33 ms. To try to reduce that, I tried converting the result of each SELECT to a BLOB. Since each result is a ref to an array of arrays, I serialized it using Storable with the help of Data::Serializer and INSERTed it into the blobs table. This was stupendously slow. I tried it both with transactions (experimenting with committing every 10, 100, and 1000 INSERTs) and without. Each BLOB also comes to about 430 KB, which would make the db much larger than my laptop's drive if run to completion; fortunately for my laptop, the darn process ran overnight and had done only about 30,000 or so INSERTs.
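
    For reference, here is a minimal sketch of that serialize-and-INSERT step; the table, column names, and sample data are placeholders rather than the real schema:

        use strict;
        use warnings;
        use DBI qw(:sql_types);
        use Data::Serializer;

        # 'blobs.db' and the blobs table are stand-ins for the real schema.
        my $dbh = DBI->connect('dbi:SQLite:dbname=blobs.db', '', '',
                               { RaiseError => 1, AutoCommit => 0 });
        $dbh->do('CREATE TABLE IF NOT EXISTS blobs (id INTEGER, frozen BLOB)');

        my $ser = Data::Serializer->new(serializer => 'Storable');
        my $ins = $dbh->prepare('INSERT INTO blobs (id, frozen) VALUES (?, ?)');

        # $result stands in for the ref to an array of arrays that a SELECT returned.
        my $result = [ [ 1, 'foo' ], [ 2, 'bar' ] ];

        my $frozen = $ser->serialize($result);     # Storable freeze via Data::Serializer
        $ins->bind_param(1, 42);
        $ins->bind_param(2, $frozen, SQL_BLOB);    # bind as a BLOB, not text
        $ins->execute;
        $dbh->commit;

        # Coming back out of the db, deserialize() rebuilds the structure:
        # my $thawed = $ser->deserialize($frozen);
        $dbh->disconnect;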

    --

    when small people start casting long shadows, it is time to go to bed

      Well, in your question you said the whole database was 430 MB, so you can see why I would suggest loading it into RAM. Perl should be able to access more than 2 GB of RAM on a 64-bit machine, and to some extent on a 32-bit one if you have the right Linux kernel.

      INSERTs will definitely run faster if you only commit every 1000. There may be some other SQLite tuning tricks, which you'd probably find on a mailing list or wiki devoted to SQLite. But if none of those work for you, I think MySQL is your best bet.
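
      For example, something like this untested sketch, with the table name and data invented for illustration:

          use strict;
          use warnings;
          use DBI;

          # Placeholder table and records; the point is the commit batching.
          my $dbh = DBI->connect('dbi:SQLite:dbname=data.db', '', '',
                                 { RaiseError => 1, AutoCommit => 0 });   # no auto-commit
          $dbh->do('CREATE TABLE IF NOT EXISTS mytable (id INTEGER, val TEXT)');
          # (One commonly mentioned SQLite tweak is 'PRAGMA synchronous = OFF',
          #  which trades durability for speed.)

          my $ins = $dbh->prepare('INSERT INTO mytable (id, val) VALUES (?, ?)');

          my @records = ( [ 1, 'foo' ], [ 2, 'bar' ] );   # stand-in for the real data
          my $n = 0;
          for my $rec (@records) {
              $ins->execute(@$rec);
              $dbh->commit if ++$n % 1000 == 0;    # commit every 1000 INSERTs
          }
          $dbh->commit;                            # flush the final partial batch
          $dbh->disconnect;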

Re^2: speeding up row by row lookup in a large db
by Ish (Acolyte) on Mar 21, 2009 at 19:06 UTC
    Apologies - that's what comes of replying while watching the rugby....