in reply to Re: How to improve memory usage in this script??
in thread How to improve memory usage in this script??

Your session objects are not being DESTROYed because you have defined them to be nonblocking. References to them are still being kept around by the Net::SNMP dispatcher, and the close method doesn't deregister them from the dispatcher.

Just switch to blocking objects. Alternatively, you can fork off one subnet at a time to a child process and its memory will get reclaimed when it exits.
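
If you want to go the fork route, here is a rough, untested sketch of the idea; the subnet list, community string and OID below are placeholders, so adapt them to your script:

#!/usr/bin/perl
use strict;
use warnings;

use Net::IP;
use Net::SNMP;

## Placeholder subnets, community string and OID -- substitute your own.
my @subnets   = ( '10.0.1.0/24', '10.0.2.0/24' );
my $community = 'public';
my $sysname   = '.1.3.6.1.2.1.1.5.0';

for my $subnet ( @subnets ) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if( $pid == 0 ) {
        ## Child: scan one subnet with plain blocking sessions, then exit.
        my $ips = Net::IP->new( $subnet ) or die Net::IP::Error();
        do {
            my( $session, $error ) = Net::SNMP->session(
                -hostname  => $ips->ip,
                -version   => 'snmpv2c',
                -community => $community,
                -timeout   => 3,
                -retries   => 1,
            );
            if( defined $session ) {
                my $result = $session->get_request( -varbindlist => [ $sysname ] );
                print $ips->ip, ' ',
                    ( $result ? $result->{ $sysname } : $session->error ), "\n";
                $session->close;
            }
            else {
                warn $ips->ip, ": $error\n";
            }
        } while( ++$ips );

        exit 0;    ## Everything the child allocated is handed back to the OS here.
    }

    waitpid( $pid, 0 );    ## Parent: wait, so only one subnet is scanned at a time.
}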

Re^3: How to improve memory usage in this script??
by Anonymous Monk on Feb 04, 2008 at 11:21 UTC
    Actually, if blocking mode is set I can't use callbacks; they only work in non-blocking mode. I'm really very inexperienced with fork. Could you give me an example of that? Also, is it possible to specify the number of processes at a time instead of using the subnet to regulate them? Btw, the reason I said that 60MB is too much is because it hasn't done the real job yet, which would be to store the data in variables and then print it. If it's using 60MB just for scanning and receiving timeouts, I can only wonder how much it will take after it starts receiving data.

      Try it this way. Create a set number of sessions, then dispatch events (using snmp_dispatch_once()) until you get a reply. Then create another and dispatch events until you get another reply, and so on. Adjust the value of $MAXCONCURRENT to control your memory usage.

      (Note: untested code):

      #!/usr/bin/perl -w

      use warnings;
      use strict;

      use Net::IP;
      use Net::SNMP qw( snmp_dispatch_once oid_lex_sort );
      #use Smart::Comments '###';

      my $startip   = $ARGV[0] || die "Missing Starting IP";
      my $endip     = $ARGV[1] || die "Missing Ending IP";
      my $community = $ARGV[2] || die "Missing community string";

      my $ips = Net::IP->new( "$ARGV[ 0 ] - $ARGV[ 1 ]" );

      my $MAXCONCURRENT = 50;
      my $running       = 0;

      ### Updated do{} while (taken from docs) to while(){} per reply.
      while( ++$ips ) {
          my( $session, $error ) = Net::SNMP->session(
              -hostname    => $ips->ip,
              -version     => 'snmpv2c',
              -nonblocking => 1,
              -community   => "$community",
              -timeout     => 3,
              -retries     => 1,
          );

          if( defined( $session ) ) {
              my $serialno = '.1.3.6.1.3.83.1.1.4.0';
              my $mac      = '.1.3.6.1.2.1.2.2.1.6.2';
              my @msoids   = ( $mac, $serialno );

              my $result = $session->get_request(
                  -varbindlist => \@msoids,
                  -callback    => [ \&getms, $session, $ips->ip ],
              );
              $running++;    ## Count the sessions started
          }
          else {
              warn sprintf( "Session not defined for %s: %s\n", $ips->ip, $error );
          }

          ## start another unless we have the max running
          next unless $running > $MAXCONCURRENT;

          ## Dispatch events until we get a reply from one
          snmp_dispatch_once() while $running > $MAXCONCURRENT;
      }

      ## Drain any outstanding requests before exiting
      snmp_dispatch_once() while $running > 0;

      exit;

      sub getms {
          my $obj     = shift;    ## the Net::SNMP object handed to the callback
          my $session = shift;
          my $hfcip   = shift;

          $running--;    ## One more done

          if( !defined( $obj->var_bind_list ) ) {
              warn "$hfcip SNMP Error: ", $obj->error, "\n";
              return;
          }

          ## print values for the oids
          $session->close;
      }

      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
        This should work, and it's better than the forking idea because it keeps the dispatcher busy monitoring at least $MAXCONCURRENT requests at a time. Let us know if you still have memory issues with this approach.