Re^2: Slow script: DBI, MySQL, or just too much data?

by menolly (Hermit)
on Apr 15, 2005 at 18:20 UTC ( [id://448295] )


in reply to Re: Slow script: DBI, MySQL, or just too much data?
in thread Slow script: DBI, MySQL, or just too much data?

Yes, there are some simple joins already, but I'd have to replace, for example:
delete from testDBRepCats where repCatID = ?
with something like:
delete r.* from testDBRepCats r, testDBToMCQDB m, testDB t, clients c
where r.repCatID = m.repCatID
  and m.testDigest = t.testDigest
  and t.clientID = c.clientID
  and c.drone != '$drone'
(not checked for validity)
except that's much more complex to chunk (for time-out purposes), because just adding limit 1000 won't maintain data integrity the way the current approach does -- I process 1000 tests at a time, and all the data for those tests is removed before I check the time and move on to the next chunk. So I'm left with something like:
delete r.* from testDBRepCats r, testDBToMCQDB m, testDB t, clients c
where r.repCatID = m.repCatID
  and m.testDigest = t.testDigest
  and t.testDigest = ?
passed to execute_array(), which may or may not be a gain over the current approach, since it's a more complex statement involving a lot more tables.
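
For concreteness, here's a minimal sketch of what that chunked execute_array() approach might look like with DBI. The connection details, the $drone variable, the 1000-test chunk size, and the time budget are all assumptions for illustration (not the script's actual code), and the clients table is left out of the delete itself since this version is driven by testDigest:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:mysql:database=testdb', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 1 });
    my $drone = 'some-drone';   # assumed; stands in for the $drone used above

    # The multi-table delete from above, parameterized on testDigest.
    my $del = $dbh->prepare(q{
        delete r.* from testDBRepCats r, testDBToMCQDB m, testDB t
        where r.repCatID = m.repCatID
          and m.testDigest = t.testDigest
          and t.testDigest = ?
    });

    my $start  = time;
    my $budget = 600;   # assumed per-run time limit, in seconds

    while (time - $start < $budget) {
        # Grab the next chunk of (up to) 1000 tests not belonging to a drone client.
        my $digests = $dbh->selectcol_arrayref(q{
            select t.testDigest
            from testDB t join clients c on t.clientID = c.clientID
            where c.drone != ?
            limit 1000
        }, undef, $drone);
        last unless @$digests;

        # One execute_array() call per chunk instead of one execute() per digest.
        my @status;
        $del->execute_array({ ArrayTupleStatus => \@status }, $digests);

        # ... delete the rest of the data for these tests here, finishing with
        # the testDB rows themselves, so the next select returns a fresh chunk.
    }

Whether batching the bind values this way beats a straight loop of per-digest executes depends on how the saved round trips weigh against the heavier join work per statement, which is exactly the trade-off described above.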