in reply to Re^2: How to improve memory usage in this script??
in thread How to improve memory usage in this script??

actually, if blocking mode is set i can't use callbacks; they only work in non-blocking mode. i'm really very inexperienced with fork. could you give me an example of that? also, is it possible to specify a fixed number of processes at a time, instead of using the subnet to regulate them? Btw, the reason i said 60MB is too much is that the script hasn't done the Real Job yet, which would be to store the data in variables and then print it. If it's already using 60MB just scanning and receiving timeouts, i can only wonder how much it will take once it starts receiving data.

Re^4: How to improve memory usage in this script??
by BrowserUk (Patriarch) on Feb 04, 2008 at 12:56 UTC

    Try it this way. Create a set number of sessions, then dispatch events (using snmp_dispatch_once()) until you get a reply. Then create another and dispatch events until you get another reply, and so on. Adjust the value of $MAXCONCURRENT to control your memory usage.

    (Note: untested code):

    #!/usr/bin/perl -w
    use warnings;
    use strict;

    use Net::IP;
    use Net::SNMP qw( snmp_dispatch_once oid_lex_sort );
    #use Smart::Comments '###';

    my $startip   = $ARGV[0] || die "Missing Starting IP";
    my $endip     = $ARGV[1] || die "Missing Ending IP";
    my $community = $ARGV[2] || die "Missing community string";

    my $ips = Net::IP->new( "$startip - $endip" );

    my $MAXCONCURRENT = 50;
    my $running = 0;

    ### Updated do{} while (taken from docs) to while(){} per reply.
    while( ++$ips ) {
        my( $session, $error ) = Net::SNMP->session(
            -hostname    => $ips->ip,
            -version     => 'snmpv2c',
            -nonblocking => 1,
            -community   => $community,
            -timeout     => 3,
            -retries     => 1,
        );

        if( defined $session ) {
            my $serialno = '.1.3.6.1.3.83.1.1.4.0';
            my $mac      = '.1.3.6.1.2.1.2.2.1.6.2';
            my @msoids   = ( $mac, $serialno );

            my $result = $session->get_request(
                -varbindlist => \@msoids,
                -callback    => [ \&getms, $session, $ips->ip ],
            );
            $running++;    ## Count the sessions started
        }
        else {
            ## $session is undef here, so report the $error string instead
            warn sprintf "Session not defined for %s: %s\n", $ips->ip, $error;
        }

        ## start another unless we have the max running
        next unless $running > $MAXCONCURRENT;

        ## Dispatch events until we get a reply from one
        snmp_dispatch_once() while $running > $MAXCONCURRENT;
    }

    ## Drain the sessions still outstanding when the loop ends
    snmp_dispatch_once() while $running > 0;

    exit;

    sub getms {
        my $obj     = shift;
        my $session = shift;
        my $hfcip   = shift;

        $running--;    ## One more done

        if( !defined $obj->var_bind_list ) {
            warn "$hfcip SNMP Error. ", $obj->error, "\n";
            return;
        }

        ## print values for the oids
        $session->close;
    }

    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      This should work, and it's better than the forking idea because it keeps the dispatcher busy with monitoring at least $MAXCONCURRENT requests at a time. Let us know if you still have memory issues with this approach.
        ok thx a lot for the script browseruk, and thx for the fast response! ;). When I ran it, it gave me the error "Can't "next" outside a loop block", so I used while( ++$ips ){} instead of the do{}. As for the memory usage, it now took a lot more: at 10.0.43 it was already at 60MB and increasing (remember that all I've got so far is timeouts, so 60MB really is too much in my opinion). I'm interested in the fork method because of this: "you can fork off one subnet at a time to a child process and its memory will get reclaimed when it exits." So if I understood correctly, you would scan 255 IPs with a child, and when it's done all the memory it used gets returned? So, if the script takes 10MB scanning 10.0.0.0-10.0.0.255, the 10.0.0.0-10.0.1.255 scan should take approximately the same?
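        For what it's worth, the fork-per-subnet idea can be sketched roughly like this. This is my own illustration, not code from the thread: scan_subnet() is a hypothetical placeholder for the Net::SNMP session/dispatch work, and $MAXCHILDREN caps how many children run at once (which is also one way to get a fixed number of processes instead of letting the subnet size regulate it). Each child scans one /24 and exits, so the OS reclaims all of its memory when it does.

```perl
#!/usr/bin/perl
# Sketch: fork one child per /24 subnet, capped at $MAXCHILDREN
# concurrent children. scan_subnet() is a hypothetical placeholder.
use strict;
use warnings;
use POSIX ':sys_wait_h';

my $MAXCHILDREN = 4;    # cap on concurrent child processes
my $children    = 0;    # how many children are currently running

sub scan_subnet {
    my( $subnet ) = @_;
    # Placeholder: real code would create nonblocking Net::SNMP
    # sessions for $subnet.0 .. $subnet.255 and dispatch them here.
    print "child $$ scanning $subnet.0/24\n";
}

for my $third ( 0 .. 7 ) {            # e.g. subnets 10.0.0.x .. 10.0.7.x
    my $subnet = "10.0.$third";

    # Throttle: block until we are under the cap, reaping one
    # finished child per wait.
    while( $children >= $MAXCHILDREN ) {
        waitpid( -1, 0 );
        $children--;
    }

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if( $pid == 0 ) {                 # child process
        scan_subnet( $subnet );
        exit 0;                       # child's memory reclaimed here
    }
    $children++;                      # parent keeps count
}

# Reap any children still running before the parent exits.
while( $children-- > 0 ) { waitpid( -1, 0 ) }
```

        Because each subnet's data lives only in a short-lived child, the parent's footprint stays flat no matter how many subnets you walk through, at the cost of fork/exit overhead per subnet.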