in reply to Re: Massive Perl Memory Leak
in thread Massive Perl Memory Leak

But if you'd posted the code 3 days ago as asked, your problem would probably be fixed by now.
Oh? :P The problem with that is that the code is over 100K, and there's really nothing wrong with my code; I've been staring at it for weeks and I've been doing this for many years.

I think I've found something with my rebuild effort: a code block that, when commented out, stops the script from leaking. I really defy anyone to find anything syntactically wrong with this code. :)

## all in ifpoll
local %routes = ();
my $result6;

## device can have CIDR or non-CIDR routing
$result6 = $$session->get_entries(-columns =>
    [$ipCidrRouteIfIndex, $ipCidrRouteProto, $ipCidrRouteType]);
$result6 = $$session->get_entries(-columns =>
    [$ipRouteNextHop, $ipRouteIfIndex, $ipRouteMask, $ipRouteProto, $ipRouteType])
    unless %$result6;
if (!defined($result6)) {
    printf(STDERR "ERROR(routes): %s %s.\n", $devarg, $$session->error);
}

local $testkey = each %$result6;
if ($testkey =~ m/^\Q1.3.6.1.2.1.4.24.4.1.\E/o) {
    print "Entering CIDR parsing\n";
    foreach $key (keys %$result6) {
        if ($key =~ m/^\Q$ipCidrRouteProto\E/o) {
            my @temp = $key =~ m/^\Q$ipCidrRouteProto\E\.(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\.(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\.0\.(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})/o;
            my $id = join ".", @temp;
            $routes{$id}{"proto"} = ${$result6}{$key};
        } elsif ($key =~ m/^\Q$ipCidrRouteType\E/o) {
            my @temp = $key =~ m/^\Q$ipCidrRouteType\E\.(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\.(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\.0\.(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})/o;
            my $id = join ".", @temp;
            $routes{$id}{"type"} = ${$result6}{$key};
        } elsif ($key =~ m/^\Q$ipCidrRouteIfIndex\E/o) {
            # dest network   dest mask   next hop
            my @temp = $key =~ m/^\Q$ipCidrRouteIfIndex\E\.(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\.(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\.0\.(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})/o;
            my $id = join ".", @temp;
            $routes{$id}{"net"}     = $temp[0];
            $routes{$id}{"mask"}    = $temp[1];
            $routes{$id}{"nexthop"} = $temp[2];
            $routes{$id}{"ifindex"} = ${$result6}{$key};
        }
    }
} elsif ($testkey =~ m/^\Q1.3.6.1.2.1.4.21.1.\E/) {
    print "Entering non CIDR parsing\n";
    foreach $key (keys %$result6) {
        my $index;
        if (($index) = $key =~ m/^\Q$ipRouteNextHop\E\.(.+)/o) {
            #print "Found next hop ${$result6}{$key} for $index in $key\n";
            $routes{$index}{"nexthop"} = ${$result6}{$key}; #|| "NULL" or die "route assignment failed\n";
            $routes{$index}{"net"} = $index;
        } elsif (($index) = $key =~ m/^\Q$ipRouteIfIndex\E\.(.+)/o) {
            $routes{$index}{"ifindex"} = ${$result6}{$key}; # || "NULL" or die "route assignment failed\n";
            #print "Found destination ${$result6}{$key} for $index in $key\n";
        } elsif (($index) = $key =~ m/^\Q$ipRouteMask\E\.(.+)/o) {
            $routes{$index}{"mask"} = ${$result6}{$key}; # || "NULL" or die "route assignment failed\n";
            #print "Found mask ${$result6}{$key} for $index in $key\n";
        } elsif (($index) = $key =~ m/^\Q$ipRouteProto\E\.(.+)/o) {
            $routes{$index}{"proto"} = ${$result6}{$key}; # || "NULL" or die "route assignment failed\n";
            #print "Found proto ${$result6}{$key} for $index in $key\n";
        } elsif (($index) = $key =~ m/^\Q$ipRouteType\E\.(.+)/o) {
            $routes{$index}{"type"} = ${$result6}{$key}; # || "NULL" or die "route assignment failed\n";
            #print "Found type ${$result6}{$key} for $index in $key\n";
        }
    }
}

%{$datahash{"routes"}} = %routes;
$session is a reference to the Net::SNMP object. $result6 is a reference to an anonymous hash containing the SNMP return values. Comment this all out and no memory leak.
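For anyone following along: the keys get_entries() returns are of the form "<column OID>.<row index>", and the CIDR branch above just splits that index back into destination network, mask, and next hop. Here is a stripped-down, SNMP-free sketch of that parsing; the sample key and its values are made up for illustration:

```perl
use strict;
use warnings;

# Hypothetical column OID plus a sample key shaped the way get_entries()
# would return it: <ipCidrRouteIfIndex OID>.<dest>.<mask>.0.<next hop>
my $ipCidrRouteIfIndex = '1.3.6.1.2.1.4.24.4.1.5';
my $key = "$ipCidrRouteIfIndex.10.1.2.0.255.255.255.0.0.192.168.0.1";

my %routes;
my @temp = $key =~ m/^\Q$ipCidrRouteIfIndex\E
                     \.(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})   # dest network
                     \.(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})   # dest mask
                     \.0
                     \.(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})   # next hop
                    /x;
my $id = join ".", @temp;
$routes{$id}{net}     = $temp[0];
$routes{$id}{mask}    = $temp[1];
$routes{$id}{nexthop} = $temp[2];

print "$routes{$id}{net} / $routes{$id}{mask} -> $routes{$id}{nexthop}\n";
# prints: 10.1.2.0 / 255.255.255.0 -> 192.168.0.1
```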

Any ideas?

Replies are listed 'Best First'.
Re^3: Massive Perl Memory Leak
by BrowserUk (Patriarch) on Jun 14, 2007 at 18:11 UTC

    Best guess. If you just commented out the two lines that call $$session->get_entries(...), you'd still not see the memory growth. Not that that would be surprising, as the rest of the code wouldn't be doing much of anything.

    Looking at it from the other direction. Leave all the other code commented out and just call whichever of those two lines is appropriate--but do nothing with the results returned from the call--and the memory growth will return.

    $session is a reference to the Net::SNMP object

    Is that module thread safe? Are you reusing a single object to talk to multiple devices? Are you sharing one instance between threads?

    You might be able to check whether the session object is accumulating data internally long after you have finished with it by running Devel::Size on $session after each call to get_entries().
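    A sketch of that check, assuming Devel::Size is installed. The session handle here is a plain hash standing in for the real object; in the actual script you would pass $$session to report_growth() after each poll:

```perl
use strict;
use warnings;
use Devel::Size qw(total_size);

# Track the total size of any reference between calls and report the delta.
my $last_size = 0;
sub report_growth {
    my ($label, $ref) = @_;
    my $size  = total_size($ref);
    my $delta = $size - $last_size;
    $last_size = $size;
    printf STDERR "%s: %d bytes (%+d)\n", $label, $size, $delta;
    return $size;
}

# Demo on a stand-in structure: if the object retains data, the size grows.
my $fake_session = { buffer => [] };
report_growth('after poll 1', $fake_session);
push @{ $fake_session->{buffer} }, 'x' x 10_000;   # simulate retained data
report_growth('after poll 2', $fake_session);
```

    If the number keeps climbing across polls on the real $$session, the object is hoarding something internally.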


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Funny you mention those two lines. I left everything commented out *except* the first CIDR SNMP call, did nothing with the results, just let the data go nowhere, and the memory leak returned. Instead of sitting in the 700 MB range as it does otherwise, it's now at 1.2 GB. With that one call alone.

      Net::SNMP is pure Perl, so it should theoretically be thread safe. None of the objects are instantiated until we're working on a specific device. I call destroy on it at the end of its usage, and even if it were left alone it should be overwritten on each loop iteration.

      sub wreckobject {
          $session->close;
          undef $session;
      }
      And if my SNMP usage were fundamentally flawed, you'd think all the other calls would blow it up far worse than the routing table lookups do. I'll play around with various usages of the object and see if anything changes. The Net::SNMP module was the one thing, even at the beginning, that I was hoping there was nothing wrong with, and I had a Murphy's Law gut feeling that the problem was in there.

        I know nothing about SNMP or Net::SNMP, but I just looked up the get_entries() method and the first thing I read is:

        This method performs repeated SNMP get-next-request or get-bulk-request ...

        I also notice that there is an optional [-maxrepetitions => $max_reps,] argument which you aren't using. I don't know how to interpret that, but is there any possibility that you could be gathering a crap load more stuff than you intend to?

        Also, since you are doing this on 100 devices concurrently, maybe you would be better off using get_next_request() and fetching the columns one at a time? That said, it's not at all clear to me that doing so would reduce the memory usage. It looks like it might accumulate the data internally and retain it until the object is disposed of.
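        For what it's worth, the one-column-at-a-time walk described above amounts to repeatedly asking the agent for the lexicographically next OID under a single column, and stopping once the reply leaves that subtree. A self-contained sketch of that loop follows; the real version would call $session->get_next_request(-varbindlist => [$oid]) and read the returned var_bind_list, but here a sorted hash stands in for the agent, and all OIDs and values are made up:

```perl
use strict;
use warnings;

# Stand-in for the agent's MIB: one route column plus a neighbouring one.
my $ipRouteNextHop = '1.3.6.1.2.1.4.21.1.7';
my %mib = (
    "$ipRouteNextHop.10.0.0.0"      => '192.168.0.1',
    "$ipRouteNextHop.10.0.1.0"      => '192.168.0.2',
    '1.3.6.1.2.1.4.21.1.8.10.0.0.0' => 'other column',
);

# Simulate get_next_request(): return the first key after $oid, or undef.
sub get_next {
    my ($oid) = @_;
    for my $k (sort keys %mib) {
        return $k if $k gt $oid;
    }
    return undef;
}

# Walk exactly one column: stop as soon as the OID leaves the subtree.
my %nexthop;
my $oid = $ipRouteNextHop;
while (defined(my $next = get_next($oid))) {
    last unless $next =~ m/^\Q$ipRouteNextHop\E\.(.+)/;
    $nexthop{$1} = $mib{$next};
    $oid = $next;
}

print "$_ -> $nexthop{$_}\n" for sort keys %nexthop;
```

        One row lives in memory per iteration, which is the potential memory advantage over pulling whole tables in one get_entries() call.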

