in reply to How to make my DBD::CSV DB code faster.

If performance is an issue, I'd tend to agree with the first reply: maybe it's time to start looking at a real database engine.

If you don't think you're ready for that (... aw c'mon! why not? well, anyway...) then you should consider changing from this approach:

    foreach row (id,hid,instance,etc...) in table1
        lookup row in table2 that contains id,hid
        lookup a row in table3 that contains id,hid,instance
to something like this:
    load all of table2 into a hash, keyed by "id,hid"
    load all of table3 into a hash, keyed by "id,hid,instance"
    foreach row (id,hid,instance,etc...) in table1
        do something with row data, $table2{"id,hid"} and $table3{"id,hid,instance"}
Since you're using DBD::CSV instead of a real database server, I'm assuming that the amounts of data in table2 and table3 are small enough to fit comfortably in memory. If that's not true, then you really should think seriously about mysql or postgres.
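Here's a minimal sketch of that approach in Perl, assuming hypothetical table names (table1, table2, table3 as CSV files in the current directory) and column names (id, hid, instance) — adapt the SQL and key columns to your actual schema:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( "dbi:CSV:f_dir=.", undef, undef,
        { RaiseError => 1 } );

    # Load table2 once, keyed by "id,hid"
    my %table2;
    my $sth2 = $dbh->prepare("SELECT * FROM table2");
    $sth2->execute;
    while ( my $row = $sth2->fetchrow_hashref ) {
        $table2{ join ",", @{$row}{qw(id hid)} } = $row;
    }

    # Load table3 once, keyed by "id,hid,instance"
    my %table3;
    my $sth3 = $dbh->prepare("SELECT * FROM table3");
    $sth3->execute;
    while ( my $row = $sth3->fetchrow_hashref ) {
        $table3{ join ",", @{$row}{qw(id hid instance)} } = $row;
    }

    # Single pass over table1; each lookup is now an O(1) hash access
    # instead of a fresh query against the CSV files for every row.
    my $sth1 = $dbh->prepare("SELECT * FROM table1");
    $sth1->execute;
    while ( my $row = $sth1->fetchrow_hashref ) {
        my $t2 = $table2{ join ",", @{$row}{qw(id hid)} };
        my $t3 = $table3{ join ",", @{$row}{qw(id hid instance)} };
        # ... do something with $row, $t2 and $t3 ...
    }

The point is that each CSV file gets read exactly once; the per-row "join" work is reduced to hash lookups in memory.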

Re^2: How to make my DBD::CSV DB code faster.
by jZed (Prior) on Sep 30, 2005 at 02:47 UTC
    Your second approach is pretty much how SQL::Statement handles joins: it loads each table only once and then searches in-memory hashes.