Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

This script runs SNMP queries on multiple routers at a time. It's working fine, but memory usage has me worried. Just for testing, I scanned non-existent routers so they timed out; on a timeout the script does nothing but keeps scanning for a live router. So I scanned from 10.0.0.0 to 10.0.255.255, and RAM grew from 10 MB to 60 MB by the time the scan reached 10.0.100.0. That can't be right, can it? I mean, it has actually done nothing but scan and return "time out" for all those IPs, so what is using so much RAM? It might be the sessions hash; I've tried closing them, but they still seem to be open. Can I do something to avoid using this much RAM?
#!/usr/bin/perl -w
use warnings;
use strict;
use Net::SNMP qw(snmp_dispatcher oid_lex_sort);
#use Smart::Comments '###';

my $startip   = $ARGV[0] || die "Missing Starting IP";
my $endip     = $ARGV[1] || die "Missing Ending IP";
my $community = $ARGV[2] || die "Missing community string";

my @ip_start = split( /\./, $startip );
my @ip_end   = split( /\./, $endip );
my $contadr  = 0;
my ( $i, $j, $k, $l );

for ( $l = $ip_start[0]; $l <= $ip_end[0]; $l++ ) {
    for ( $i = $ip_start[1]; $i <= $ip_end[1]; $i++ ) {
        for ( $j = $ip_start[2]; $j <= $ip_end[2]; $j++ ) {
            for ( $k = $ip_start[3]; $k <= $ip_end[3]; $k++ ) {
                my ( $session, $error ) = Net::SNMP->session(
                    -hostname    => "$l.$i.$j.$k",
                    -version     => 'snmpv2c',
                    -nonblocking => 1,
                    -community   => $community,
                    -timeout     => 3,
                    -retries     => 1,
                );
                if ( defined $session ) {
                    my $serialno = '.1.3.6.1.3.83.1.1.4.0';
                    my $mac      = '.1.3.6.1.2.1.2.2.1.6.2';
                    my @msoids   = ( $mac, $serialno );  # was $hfcmac, which is never declared
                    my $result   = $session->get_request(
                        -varbindlist => \@msoids,
                        -callback    => [ \&getms, $session, "$l.$i.$j.$k" ],
                    );
                    $session->close;  # reason for all this close is that they don't
                                      # seem to get closed no matter where I put them
                }
                else {
                    print "Session not defined! $error\n";
                }
            }
            snmp_dispatcher();
        }
    }
}
exit;

sub getms {
    my $obj     = shift;
    my $session = shift;
    my $hfcip   = shift;
    if ( !defined $obj->var_bind_list ) {
        warn "$hfcip SNMP Error. ", $obj->error(), "\n";
        return;
    }
    ## print values for the oids
}

Re: How to improve memory usage in this script??
by jwkrahn (Abbot) on Feb 03, 2008 at 15:07 UTC

    Instead of using four nested for loops you could do it like this:

    #!/usr/bin/perl
    use warnings;
    use strict;
    use Net::SNMP qw(snmp_dispatcher oid_lex_sort);
    #use Smart::Comments '###';
    use Socket;

    my $startip   = $ARGV[0] or die 'Missing Starting IP';
    my $endip     = $ARGV[1] or die 'Missing Ending IP';
    my $community = $ARGV[2] or die 'Missing community string';

    my $ip_start = unpack 'N', inet_aton $startip;
    my $ip_end   = unpack 'N', inet_aton $endip;
    my $contadr  = 0;

    for my $i ( $ip_start .. $ip_end ) {
        my $ip = inet_ntoa pack 'N', $i;   # semicolon was missing here
        my ( $session, $error ) = Net::SNMP->session(
            -hostname    => $ip,
            -version     => 'snmpv2c',
            -nonblocking => 1,
            -community   => $community,
            -timeout     => 3,
            -retries     => 1,
        );
        if ( defined $session ) {
            my $serialno = '.1.3.6.1.3.83.1.1.4.0';
            my $mac      = '.1.3.6.1.2.1.2.2.1.6.2';
            my @msoids   = ( $mac, $serialno );  # was $hfcmac, which is never declared
            my $result   = $session->get_request(
                -varbindlist => \@msoids,
                -callback    => [ \&getms, $session, $ip ],
            );
            $session->close;  # they don't seem to get closed no matter where I put them
        }
        else {
            print "Session not defined! $error\n";
        }
        snmp_dispatcher();
    }
    exit;
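    The rewrite above rests on `inet_aton`/`inet_ntoa` from the core Socket module packing an IPv4 address into a 32-bit big-endian integer, which turns the four nested loops into one flat range. A minimal standalone check of that round trip:

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use Socket qw(inet_aton inet_ntoa);

    # Dotted quad -> 32-bit network-order integer, and back.
    my $start = unpack 'N', inet_aton('10.0.0.0');     # 10 * 2**24 = 167772160
    my $end   = unpack 'N', inet_aton('10.0.255.255');

    printf "start=%d end=%d count=%d\n", $start, $end, $end - $start + 1;
    # -> start=167772160 end=167837695 count=65536

    print inet_ntoa( pack 'N', $start + 256 ), "\n";   # -> 10.0.1.0
    ```

    Stepping the integer by 1 walks every address in order, including the .0 and .255 of each /24, so skip those explicitly if the gear objects to network/broadcast addresses.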
Re: How to improve memory usage in this script??
by BrowserUk (Patriarch) on Feb 03, 2008 at 13:23 UTC
    so i scanned from 10.0.0.0 to 10.0.255.255 and ram was increasing from 10mb to 60mb when scanning was at 10.0.100.0. that cant be right, can it?

    60 MB / ( 100 * 256 ) ≈ 2.4 KB per concurrent session. Doesn't sound excessive to me?


      Your session objects are not being DESTROYed because you have defined them to be non-blocking: the Net::SNMP dispatcher is still holding references to them, and the close method doesn't deregister them from the dispatcher.

      Just switch to blocking objects. Alternatively, you can fork off one subnet at a time to a child process and its memory will get reclaimed when it exits.

        Actually, if blocking mode is set I can't use callbacks; they only work in non-blocking mode. I'm really very inexperienced with fork. Could you give me an example of that? Also, is it possible to specify a number of processes at a time instead of using the subnet to regulate them? BTW, the reason I said 60 MB is too much is that it hasn't done the real job yet, which would be to store the data in variables and then print it. If just scanning and receiving timeouts uses 60 MB, I can only wonder how much it will take once it starts receiving data.
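        A minimal sketch of the fork-off-a-subnet idea suggested above. `scan_subnet` is a hypothetical stand-in for the real Net::SNMP session/dispatch loop over one /24; the point is that everything a child allocates is returned to the OS when the child exits. The `$max_children` cap also answers the "number of processes at a time" question, since it throttles by live child count rather than by subnet:

        ```perl
        #!/usr/bin/perl
        use strict;
        use warnings;

        # Hypothetical worker: scan one /24 and exit.  It stands in for the
        # OP's Net::SNMP loop; every session the child creates dies with
        # the child process, so its memory is reclaimed on exit.
        sub scan_subnet {
            my ($prefix) = @_;
            # ... build sessions for "$prefix.0" .. "$prefix.255",
            # ... run snmp_dispatcher(), print the results ...
        }

        my $max_children = 4;    # at most 4 subnets in flight at once
        my %kids;
        my $forked = 0;

        for my $third ( 0 .. 255 ) {
            # Block until we are below the concurrency limit.
            while ( keys %kids >= $max_children ) {
                my $done = wait();
                delete $kids{$done};
            }
            my $pid = fork();
            die "fork failed: $!" unless defined $pid;
            if ( $pid == 0 ) {       # child: do the work, then vanish
                scan_subnet("10.0.$third");
                exit 0;
            }
            $kids{$pid} = 1;         # parent: remember the child
            $forked++;
        }
        1 while wait() != -1;        # reap any children still running
        ```

        The children here only print; if the parent needs the results back, collect each child's STDOUT via a pipe or temp file, since variables set in a child are not visible to the parent.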