crackotter has asked for the wisdom of the Perl Monks concerning the following question:

Hi monks, I currently have a script which runs thousands of similar processes (SNMP polling), 100 at a time, with the Thread::Pool class. Due to problems with Perl 5.8 and threads I was planning on moving to forks. How would I go about doing this? Just a basic example of a fork "pool" would be greatly helpful.

Replies are listed 'Best First'.
Re: Fork Pool?
by liz (Monsignor) on Oct 08, 2003 at 14:27 UTC
    Have you tried forks as a replacement for threads? That should work without any code changes.
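
    For example, if the current code is shaped roughly like this, swapping the pragma is usually the only change needed (just a sketch; poll_host() and @hosts stand in for whatever the script already does):

    # Load forks before anything that would load threads.
    use forks;            # instead of: use threads;
    use forks::shared;    # instead of: use threads::shared;
    use Thread::Pool;

    # The Thread::Pool code itself stays the same; forks provides the
    # threads API on top of fork(), so the workers become processes.
    my $pool = Thread::Pool->new({
        workers => 100,
        do      => \&poll_host,
    });
    $pool->job($_) for @hosts;
    $pool->shutdown;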

    Otherwise have a look at Parallel::ForkManager.

    Liz

      I have been using Parallel::ForkManager for about a month to do simultaneous FTP uploads to our web server. The FTP server can handle 5 connections at a time, and I just modified the children tutorial from its documentation to implement this in no time.
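
      For reference, the basic loop looks roughly like this (a sketch along those lines, not my production code; the host, login, and file list are made-up placeholders):

      use strict;
      use warnings;
      use Net::FTP;
      use Parallel::ForkManager;

      my @files = glob('htdocs/*.html');           # placeholder file list
      my $pm    = new Parallel::ForkManager(5);    # at most 5 uploads at once

      for my $file (@files) {
          $pm->start and next;    # parent: move on to the next file

          # Child: do one upload, then exit via finish().
          my $ftp = Net::FTP->new('ftp.example.com') or die "connect: $@";
          $ftp->login('user', 'password') or die "login: ",     $ftp->message;
          $ftp->put($file)                or die "put $file: ", $ftp->message;
          $ftp->quit;

          $pm->finish;
      }
      $pm->wait_all_children;
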
Re: Fork Pool?
by jmanning2k (Pilgrim) on Oct 08, 2003 at 14:47 UTC
    I've mentioned this before, but I like the forking code in SpamAssassin. It's licensed under both the GPL and the Perl Artistic license, so it's pretty flexible.

    See the function &start_children in this module. It uses IO::Socket to communicate with each child process. $opt_j is a flag for max threads.

    The reap_children function is also useful.

    Just write a function to "&do_something" for each child (replacing the $self->run_message line), then modify the parent code to supply your data to each child process.

    You can start off a pool of 50 or 100 child processes, then they'll run until all your jobs are complete.

    Update: I compared the current SA code to the code I typically use now (originally derived from the SA code). My version is less tied to their model of doing things, and might be more flexible/easier to understand. Besides, a concrete example makes what I was trying to say clearer.
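
    Something along these lines (a stripped-down sketch of the pattern, not the SpamAssassin code itself; do_something() and the job list are placeholders): the parent pre-forks a fixed number of children, keeps a socketpair to each one, hands out jobs over those sockets, and reaps the children when the work is done.

    use strict;
    use warnings;
    use Socket;
    use IO::Handle;

    my $max_children = 10;                      # size of the pool
    my @jobs = map { "job$_" } 1 .. 100;        # placeholder work items
    my @to_child;                               # parent's end of each socketpair

    for (1 .. $max_children) {
        socketpair(my $parent_side, my $child_side,
                   AF_UNIX, SOCK_STREAM, PF_UNSPEC) or die "socketpair: $!";
        $parent_side->autoflush(1);
        $child_side->autoflush(1);

        my $pid = fork();
        die "fork: $!" unless defined $pid;

        if ($pid == 0) {                        # child
            close $parent_side;
            while (my $job = <$child_side>) {
                chomp $job;
                last if $job eq 'QUIT';
                do_something($job);             # placeholder for the real work
            }
            exit 0;
        }
        close $child_side;                      # parent keeps its end
        push @to_child, $parent_side;
    }

    # Hand out the jobs round-robin, then tell every child to stop.
    my $i = 0;
    for my $job (@jobs) {
        my $fh = $to_child[ $i++ % @to_child ];
        print $fh "$job\n";
    }
    print {$_} "QUIT\n" for @to_child;
    close $_ for @to_child;
    wait() for 1 .. $max_children;              # reap the children

    sub do_something { my ($job) = @_; print "[$$] handled $job\n"; }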

    ~Jon
Re: Fork Pool?
by Rhys (Pilgrim) on Oct 08, 2003 at 17:37 UTC
    Since your problem has specifically to do with SNMP polling, you probably ought to have a look at the callback function in the SNMP.pm module that comes with Net-SNMP (NOT the Net::SNMP module from CPAN). With it, you can set up several (more than 100) concurrent SNMP sessions at once. I use this to scan my network for new devices. Here's the code:
    ...
    # Open the SNMP session and query in the background.
    $sess = new SNMP::Session(DestHost  => $thisip,
                              Community => $$optionsref{community},
                              Retries   => $$optionsref{retries},
                              Timeout   => $$optionsref{timeout},
                              Version   => '1');
    $mib = 'sysDescr';
    $vb  = new SNMP::Varbind([$mib]);

    # The responses to our queries are stored in %list.
    $var = $sess->getnext($vb, [ \&gotit, $thisip, \%list ]);

    # Update the rate limiting counter.
    $count++;

    # After every 100 IP's, wait for the timeout period (default is two
    # seconds) to keep from overwhelming routers with ARP queries.
    if ( $count > '100' ) {
        &SNMP::MainLoop($looptimer);
        $count = 0;
    }

    # Increment the IP address for the next pass.
    $intip++;
    ...

    # This is the little function called by SNMP::MainLoop when a callback
    # comes in. It just stuffs the value of the response (if any) into the
    # appropriate place in %list and returns.
    sub gotit {
        my $myip    = shift;
        my $listref = shift;
        my $vl      = shift;

        if ( defined $$vl[0] ) {
            $$listref{$myip}{desc} = $$vl[0]->val;
        }
        return();
    }
    You then set your "connection" limit however you want. I used 100 as an arbitrary limit above. Since IP addresses are accepted by inet_aton($someint), you can just convert IPs to integers and increment them to step through a subnet or range of IP addresses. Here's the conversion:
    ($octet1, $octet2, $octet3, $octet4) = split /\./, $$argsref{ip};
    $intip = ($octet1 * (256 ** 3)) + ($octet2 * (256 ** 2)) + ($octet3 * 256) + $octet4;
    Probably a little kludgy, but it works. There's probably a more elegant way to do it using inet_aton() and then converting directly from the 32-bit packed value to an int, but I haven't worked it out yet.
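
    Something like this should do it (a small sketch, not from the original post), unpacking the value inet_aton() returns and packing it back for the next host:

    use Socket qw(inet_aton inet_ntoa);

    my $intip = unpack('N', inet_aton($$argsref{ip}));    # dotted quad -> 32-bit int
    $intip++;                                             # next address in the range
    my $nextip = inet_ntoa(pack('N', $intip));            # back to a dotted quad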

    Anyway, hopefully the callback stuff above will help you out. See 'perldoc SNMP' for more info on that.

    --Rhys