PerlMonks  

DBI + 'SELECT *' - Memory Use Galore?

by tadman (Prior)
on Feb 17, 2001 at 01:31 UTC [id://58996]

tadman has asked for the wisdom of the Perl Monks concerning the following question:

Maybe I'm just delusional, but I was under the impression that when using DBI, the prepare/execute/fetch technique meant that the table was retrieved piecewise, as in one row at a time. If you were bold enough to use something like fetchall_arrayref() instead, then you would expect the entire thing to appear in RAM. I haven't been able to find any clarification on the internals of DBI with respect to this particular aspect.

I'm doing something pretty simple on a 9M row table, and yet DBI is chomping up a good 300MB+ of RAM if you should let it run that long:

    use DBI;
    # (Init w. AutoCommit=>1 occurs here)

    # Prepare the SELECT statement...
    my ($s) = $db->prepare("SELECT * FROM table");

    # ...and run it.
    my ($s_rv) = $s->execute();
    # Wait patiently, run out of RAM...

    # Presumably, should be able to fetch data and use it here:
    while (my @row = $s->fetchrow_array()) {
        ...
    }
I'm not making it past the $s->execute() statement, and I don't want to make a trip to RAM Depot® for a bulk purchase.

Replies are listed 'Best First'.
Re: DBI + 'SELECT *' - Memory Use Galore?
by kschwab (Vicar) on Feb 17, 2001 at 03:55 UTC
    This is actually a feature of the mysql code :)

    By default, the client bears the burden of handling result sets.

    You can change this to place the burden on the server via:

    $sth->{"mysql_use_result"} = 1;
    This forces DBD::mysql to use the MySQL internal call mysql_use_result() rather than mysql_store_result().
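    A minimal sketch of the fix, assuming a DBD::mysql connection (the DSN, credentials, and table name are placeholders):

    ```perl
    use strict;
    use warnings;
    use DBI;

    # Connect as usual (connection details are placeholders).
    my $dbh = DBI->connect("DBI:mysql:database=test", "user", "pass",
                           { RaiseError => 1 });

    my $sth = $dbh->prepare("SELECT * FROM table");

    # Ask DBD::mysql to stream rows via mysql_use_result()
    # instead of slurping the whole result set into client RAM.
    $sth->{mysql_use_result} = 1;

    $sth->execute();
    while (my @row = $sth->fetchrow_array()) {
        # Each row is pulled from the server as it is fetched,
        # so client memory stays roughly constant.
    }
    $sth->finish();
    ```

    Note that the attribute must be set on the statement handle after prepare() and before execute().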

    There are some references to this:

      It is not surprising that this turned out to be a MySQL thing. I was actually expecting something along those lines, but after reading through the docs, and the FAQs, and what have you, all I could find was a list of people having the same problem, but no solutions were suggested that actually worked.

      Until kschwab's, that is.

      The 'mysql_use_result' flag is a DBD::mysql feature that switches methods. As the driver documentation points out, mysql_use_result() is faster than the default mysql_store_result() and it uses less memory, but nothing comes without a price, and in this case it's pretty stiff: you can't do anything else with your database connection until you are finished with your query. Every other command is blocked with a pseudo-cryptic but insightful error message:

          Commands out of sync; You can't run this command now

      Fortunately, that one is in the docs (20.4.10).

      So, what I have done is created a special DBI database handle for that one query, and one for the others so that I can still INSERT, DELETE and what have you, without waiting for the big 'SELECT *' to finish. At least I don't have to pay per connection!
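      The two-handle arrangement described above might look something like this (the DSN and the INSERT are placeholders for illustration):

      ```perl
      use strict;
      use warnings;
      use DBI;

      my ($dsn, $user, $pass) = ("DBI:mysql:database=test", "user", "pass");

      # One connection dedicated to the big streaming SELECT...
      my $dbh_read = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });

      # ...and a second one for everything else, so INSERTs and DELETEs
      # are not blocked by "Commands out of sync".
      my $dbh_work = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });

      my $sth = $dbh_read->prepare("SELECT * FROM table");
      $sth->{mysql_use_result} = 1;
      $sth->execute();

      while (my @row = $sth->fetchrow_array()) {
          # Safe: this statement runs on the second connection.
          $dbh_work->do("INSERT INTO log (msg) VALUES (?)", undef, "processed");
      }
      ```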
Re: DBI + 'SELECT *' - Memory Use Galore?
by runrig (Abbot) on Feb 17, 2001 at 02:28 UTC
    I don't see anything wrong with what you are doing. Maybe the DBD you are using is leaking, or is stupidly written to suck up all of the rows from the select statement (or maybe the database demands that it suck in all the rows; you don't specify which database you are using), but I kind of doubt it.

    You're sure there's nothing inside the loop eating memory? Does this help any?: PERL memory leak?

      I'm just using DBD::mysql, which should be the best thing for the job. Further, the program doesn't even make it into the loop, as the 'print' diagnostic I have in the code doesn't report back. Instead, a HUGE amount of data is sent from the MySQL server over TCP/IP to the Perl process, which happily gobbles it up into RAM.

      ichimunki's idea of the brackets for the my declarations was interesting, but produced no effect. I even removed the error checking from $s->execute() and there was no change.

      The only solution (read: hack) that I came up with is to limit the query to manageable chunks:

          SELECT * FROM table LIMIT ?,?

      Then iterate through the entire table in LIMITed chunks. This is not the fix I was hoping for.
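      A sketch of that chunked workaround (the chunk size and table name are arbitrary):

      ```perl
      use strict;
      use warnings;
      use DBI;

      my $chunk  = 10_000;    # rows per fetch; tune to taste
      my $offset = 0;

      my $sth = $dbh->prepare("SELECT * FROM table LIMIT ?,?");

      while (1) {
          $sth->execute($offset, $chunk);
          my $rows = 0;
          while (my @row = $sth->fetchrow_array()) {
              $rows++;
              # process @row here
          }
          last if $rows < $chunk;    # a short chunk means the table is exhausted
          $offset += $chunk;
      }
      ```

      One caveat: without an ORDER BY, rows can shift between chunks if the table is modified mid-scan, so rows may be skipped or seen twice.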
(ichimunki) Re: DBI + 'SELECT *' - Memory Use Galore?
by ichimunki (Priest) on Feb 17, 2001 at 02:16 UTC
    My guess would be it has to do with the ()'s around your my() variables. Some quick tests on one-liners indicated to me that putting parentheses around a straight scalar assignment changes it from scalar context to list context.
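    For what it's worth, the parentheses do change the context of the assignment, although that shouldn't affect memory use here. A quick illustration:

    ```perl
    # Scalar context: execute()'s return value, as-is.
    my $rv   = $sth->execute();

    # List context: the return value arrives as a one-element list.
    my ($rv2) = $sth->execute();

    # The difference shows with functions whose scalar and list
    # returns differ, e.g. localtime:
    my $when = localtime();    # a formatted date string
    my ($sec) = localtime();   # just the seconds field
    ```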
Re: DBI + 'SELECT *' - Memory Use Galore?
by xeridon (Initiate) on Feb 17, 2001 at 23:59 UTC
    My question would be, why are you trying to get a return value from the execute()? I'm not sure if this is a problem or not, because I have never done that before. I do know that it works great when I don't do that. Just my humble opinion.
      A valid reason for getting that return value is so that you can check it and do error processing. (The text of the error is in $dbh->errstr.)
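      For example, a minimal error-checking sketch (assuming RaiseError is off, so execute() returns undef on failure rather than dying):

      ```perl
      my $rv = $sth->execute();
      unless (defined $rv) {
          warn "execute failed: " . $dbh->errstr;
      }
      ```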
