In fact, if your statement handles need not overlap, you could re-use your $sth lexical for new statements, too.
That's the wrong kind of efficiency. Scalars are cheap. Execution plans are expensive. It would be better to connect to the database (there appears to be only one), prepare all the different queries into separate statement handles (possibly in an array), and then sit in a loop forever and spin on them (with a sleep at the end of the loop), as it looks like the code is monitoring a flow, rather than a single one-off affair.
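Something along these lines is what I have in mind. This is only a minimal sketch: the DSN, credentials, table names, bind value and sleep interval are all made up, so adapt them to your own schema.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical DSN and credentials -- substitute your own.
    my $dbh = DBI->connect( 'dbi:Oracle:flowdb', 'user', 'pass',
        { RaiseError => 1, AutoCommit => 1 } );

    # Prepare each query once, up front; the handles live in an array.
    my @sth = map { $dbh->prepare($_) } (
        'select count(*) from queue_a where status = ?',
        'select count(*) from queue_b where status = ?',
    );

    # Spin forever, re-executing the same prepared handles.
    while (1) {
        for my $sth (@sth) {
            $sth->execute('PENDING');
            while ( my $row = $sth->fetchrow_arrayref ) {
                print "@$row\n";
            }
        }
        sleep 60;    # pause between polling passes
    }

The point is that the prepare (and hence the execution plan) happens once per query, not once per pass through the loop.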
A lot of the queries look very similar. I would be tempted to union them all together, to avoid thunking between the database and Perl too much. Get it all in one hit, and then pull it apart in Perl-space afterwards.
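For example, something like the following, where a literal tag column lets you split the combined result set back out on the Perl side. Again, the table and column names here are invented for the sake of the sketch.

    # One round trip instead of several: tag each branch of the union
    # so the rows can be pulled apart again in Perl.
    my $sth = $dbh->prepare(q{
        select 'queue_a' as src, id, status from queue_a where status = ?
        union all
        select 'queue_b' as src, id, status from queue_b where status = ?
    });

    $sth->execute('PENDING', 'PENDING');

    my %by_source;
    while ( my ($src, $id, $status) = $sth->fetchrow_array ) {
        push @{ $by_source{$src} }, [ $id, $status ];   # split rows by tag
    }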
Instead of trimming the trailing blanks off in Perl, format the data correctly in the select in the first place, so you don't have to fiddle with it afterwards.
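That is, let the database do the trimming with RTRIM (supported by Oracle, MySQL and PostgreSQL, among others) rather than running s/\s+$// over every column in Perl. The column and table names below are, once more, just placeholders.

    # The database strips the padding before the data ever reaches Perl.
    my $sth = $dbh->prepare(q{
        select rtrim(customer_name), rtrim(account_code)
        from   accounts
        where  status = ?
    });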
Finally, are these statements returning more than one row? If a query returns only a single row, there are better ways of processing it than a while/fetchrow_hashref loop (which is just about the worst way of processing a result set).
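For single-row queries, DBI's selectrow_array and selectrow_hashref do the prepare, execute and fetch in one call. A quick sketch, again with invented table names and bind values:

    # One known row coming back: no loop needed.
    my ($count) = $dbh->selectrow_array(
        'select count(*) from queue_a where status = ?',
        undef, 'PENDING',
    );

    # Or, if you want the columns by name:
    my $row = $dbh->selectrow_hashref(
        'select id, status from queue_a where id = ?',
        undef, 42,
    );

In a polling loop you would still want to prepare the statement once and pass the handle in, but either way you avoid the per-row fetchrow_hashref overhead for something that only ever yields one row.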
• another intruder with the mooring in the heart of the Perl
In reply to Re^2: Efficiency on Perl Code! by grinder
in thread Efficiency on Perl Code! by Anonymous Monk