Dude has asked for the wisdom of the Perl Monks concerning the following question:

Hello all, I'm fetching data from a DBI query and get this error. When I change the query to pull less data, I don't get the error. The same query also runs fine against the larger data set from Oracle's SQL*Plus, which suggests the problem lies either in my Perl code or in the Perl configuration on the box (AIX 4.3.3, Perl 5.005_02 built for AIX). Any help or suggestions would be greatly appreciated. Thanks

Re: Out of memory! message
by tadman (Prior) on Apr 17, 2001 at 19:20 UTC
    "When I change the query to pull less data, I don't get this error."

    Sounds like the problem I was having earlier, where my program would grow to an insane size even though I thought I was fetching individual rows. In truth, I was downloading the entire query result into the DBI driver's client-side buffer and merely fetching rows from there.

    Check your DBI driver's documentation to see if you can switch it to an incremental fetch method. kschwab was helpful in pointing out how to do this in MySQL.
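
    For what it's worth, a minimal sketch of the incremental approach under DBD::mysql (the attribute name is driver-specific; DBI also defines a RowCacheSize hint, which drivers such as DBD::Oracle may honor):

        # mysql_use_result makes DBD::mysql stream rows from the server
        # as you fetch them, instead of buffering the whole result set
        # on the client first.
        my $sth = $dbh->prepare($select, { mysql_use_result => 1 });
        $sth->execute();
        while (my @row = $sth->fetchrow_array()) {
            # each row arrives from the server only when requested
        }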
Re: Out of memory! message
by THRAK (Monk) on Apr 17, 2001 at 19:16 UTC
    Your explanation and lack of a code example are rather vague. It sounds as if you are pulling a very large query result into an array and/or looping over it in a while loop. If the data set is really large, you are going to run out of memory. You may therefore need to refine your query with stricter criteria and run it multiple times, so you can digest the results in smaller chunks.
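
    For instance, a rough sketch of that chunked approach (the table and column names here are invented for illustration):

        # Bind a narrowing criterion so each execute() returns only one
        # digestible slice of the full result set.
        my $sth = $dbh->prepare(q{
            SELECT pidm, crn FROM enrollment WHERE term_code = ?
        });
        for my $term (@terms) {
            $sth->execute($term);
            while (my @row = $sth->fetchrow_array()) {
                # process this chunk, then let it go before the next one
            }
        }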

    -THRAK
    www.polarlava.com
      Thanks THRAK. Sorry about the lack of a code example. I'm doing what you suspect: looping over fetches from the statement handle and putting the data into a hash of hashes. I had a feeling I might have to break this up into smaller queries.
        Big queries are fine.
        However, if you're doing this, check out perltie.
        This should cure many of the evils associated with hash sizes running you out of memory.

        Malk.
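
        As a minimal sketch of the tie idea, assuming DB_File is available (this gives a flat disk-backed hash; the nested structure in this thread would need MLDBM, suggested elsewhere in the thread):

            use Fcntl;     # supplies the O_CREAT and O_RDWR flags
            use DB_File;
            # Back the hash with a disk file so it stops competing for RAM.
            tie my %big_hash, 'DB_File', 'big_hash.db', O_CREAT|O_RDWR, 0644
                or die "Cannot tie big_hash.db: $!";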
Re: Out of memory! message
by arturo (Vicar) on Apr 17, 2001 at 19:21 UTC

    It's probably not a problem with your Perl installation. I'd wager that you're pulling all of the data into a big Perl data structure, such as an array of array references; that would be the case if you're using fetchall_arrayref to get your data. But no one can tell for certain unless you post your code.

    Going with my guess: Perl takes relatively more memory than SQL*Plus to store the same amount of data, but that's in the nature of the Perl beast, which throws memory at a problem to increase efficiency.

    A possible solution is to change your code to use fetchrow_arrayref (note: "row", not "all"), which gets the data from the result set one row at a time instead of snarfing the whole thing into memory at once. That should reduce the program's memory needs. It might slow down your program's execution -- but finishing in 10 minutes is faster than never finishing, I suppose =)
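
    A minimal sketch of that loop; one caveat is that DBI may reuse the same array reference on each fetch, so copy anything you want to keep:

        my $sth = $dbh->prepare($select);
        $sth->execute();
        while (my $row = $sth->fetchrow_arrayref()) {
            my @copy = @$row;   # $row's contents are overwritten by the next fetch
            # ... process @copy, letting it go out of scope each pass ...
        }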

    Philosophy can be made out of anything. Or less -- Jerry A. Fodor

      Here is my code:
      $sth = $dbh->prepare($select);
      $sth->execute();
      while (@data = $sth->fetchrow_array()) {
          $curpidm = $data[0];
          $curcrn  = $data[8];
          unless ($seen_pidm{$curpidm}) {
              $seen_pidm{$curpidm} = 1;
              push(@pidms, $curpidm);
              $main_data{$curpidm}{"Resd"}        = $data[1];
              $main_data{$curpidm}{"Edlv"}        = $data[2];
              $main_data{$curpidm}{"Styp"}        = $data[3];
              $main_data{$curpidm}{"Degc"}        = $data[4];
              $main_data{$curpidm}{"Majr"}        = $data[5];
              $main_data{$curpidm}{"Egol"}        = $data[6];
              $main_data{$curpidm}{"Aprv"}        = $data[7];
              $main_data{$curpidm}{"TermCodeEff"} = $data[14];
              $main_data{$curpidm}{"Cipc"}        = $data[15];
          }
          $main_data{$curpidm}{$curcrn}{"Subj"}     = $data[9];
          $main_data{$curpidm}{$curcrn}{"Crs"}      = $data[10];
          $main_data{$curpidm}{$curcrn}{"Section"}  = $data[11];
          $main_data{$curpidm}{$curcrn}{"Sess"}     = $data[12];
          $main_data{$curpidm}{$curcrn}{"CreditHr"} = $data[13];
          $main_data{$curpidm}{"CurHrs"} += $data[13];
          unless ($seen_crn{$curcrn}) {
              $seen_crn{$curcrn} = 1;
              push(@crns, $curcrn);
          }
      }
      $sth->finish();
        Untested, but let's go for ease of programming here. The suggestion to use MLDBM is very good, as the error is probably due to storing so much in %main_data.
        while (@data = $sth->fetchrow_array()) {
            ($curpidm, $curcrn) = @data[0, 8];
            # ||= {} autovivifies the per-pidm record so the slice below
            # actually lands inside %main_data
            my $dest = $main_data{$curpidm} ||= {};
            unless ($seen_pidm{$curpidm}) {
                $seen_pidm{$curpidm} = 1;
                push(@pidms, $curpidm);
                @$dest{qw( Resd Edlv Styp Degc Majr Egol Aprv TermCodeEff Cipc )}
                    = @data[1 .. 7, 14, 15];
            }
            my $crn = $dest->{$curcrn} ||= {};
            @$crn{qw( Subj Crs Section Sess CreditHr )} = @data[9 .. 13];
            $dest->{CurHrs} += $data[13];
            unless ($seen_crn{$curcrn}) {
                $seen_crn{$curcrn} = 1;
                push(@crns, $curcrn);
            }
        }
        # probably unnecessary
        # $sth->finish();
        If that's not correct syntax, it's pretty close. At least it's shorter.

        The villain here is probably the %main_data hash, which is filling up with all of your fetched data. So memory isn't exhausted when you get the data from the DB; it's exhausted when you squirrel the data away for use in your program.

        The minimal-mutilation solution is to do what operating systems do to extend their memory: use the hard drive. The MLDBM module lets you store complex Perl data structures on disk and, once you tie the data structure, manipulate them transparently.
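
        A minimal MLDBM sketch (the filename is made up; note the documented catch that changes to nested levels do not write back through the tie, so you fetch a copy, modify it, and store the whole record back):

            use Fcntl;
            use MLDBM qw(DB_File Storable);  # DB_File store, Storable serializer
            tie my %main_data, 'MLDBM', 'main_data.db', O_CREAT|O_RDWR, 0644
                or die "Cannot tie main_data.db: $!";

            my $rec = $main_data{$curpidm} || {};  # fetch a copy of the record
            $rec->{CurHrs} += $data[13];           # modify the copy ...
            $main_data{$curpidm} = $rec;           # ... and store it back whole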

        Not completely related to that suggestion, but something that might make this stretch of code more maintainable: use a hash slice and an array slice instead of all those individual assignments:

        @fields = qw(Resd Edlv);    # just a sample
        @main_data{curpidm}{@fields} = @data[1,2];

        Update: gotta check that, not sure it will work as advertised ...

        Second update: it won't ... the handy @hashname shorthand is short-circuited by the fact that we've got a reference here. So, unless somebody else figures out the syntax, disregard this suggestion.
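
        (If anyone does want to chase it: an explicit dereference of the inner hash ought to restore the slice, along these lines -- untested:)

            my @fields = qw(Resd Edlv);    # just a sample
            @{ $main_data{$curpidm} }{@fields} = @data[1, 2];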

        HTH

        Philosophy can be made out of anything. Or less -- Jerry A. Fodor