use Proc::Forkfunc;
use strict;
$|++; # autoflush/unbuffer
my @child_args = qw(1 2);
# each forkfunc() call forks a child that runs child_func() with the
# given arguments while the parent carries on and forks the next one
forkfunc(\&child_func, @child_args);
forkfunc(\&child_func, 3);
forkfunc(\&child_func, 4);
forkfunc(\&child_func, 5);
sub child_func {
    # sleep for rand(3)*rand(3) seconds; four-argument select gives a
    # fractional sleep without needing Time::HiRes
    select undef, undef, undef, rand(3) * rand(3);
    print shift(@_); # print the first argument this child was handed
    print "\n";
}
Proc::Forkfunc has the annoying habit of printing "call to child returned" to STDERR, but it's not a complex module, and if you crack it open (look at the source), you can easily figure out what's going on.
You should also take a look at perlfork. fork() will work on most systems, but remember that on Win32 fork is emulated and still experimental, and the emulation isn't available before v5.6; there is an alternative, though: Win32::Process.
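For anyone stuck on a Win32 perl that can't fork, here is a rough sketch of the Win32::Process route; the ping path and command line are just placeholders for illustration, not something taken from the examples above.
use strict;
use Win32;
use Win32::Process;

my $proc;
Win32::Process::Create(
    $proc,
    'C:\\Windows\\System32\\ping.exe',  # full path to the program (assumed)
    'ping -n 1 www.perl.com',           # command line handed to it (assumed)
    0,                                  # don't inherit handles
    NORMAL_PRIORITY_CLASS,
    '.',                                # working directory
) or die 'Create failed: ' . Win32::FormatMessage(Win32::GetLastError());

$proc->Wait(INFINITE);                  # block until the child exits
$proc->GetExitCode(my $rc);
print "ping exited with status $rc\n";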
Here is some Thread::Queue based code that demonstrates another way to solve the parallel-ping problem. You can use it to model generic parallel-processing subroutines where you have one work queue and multiple workers. The results generated by the workers are fed back into a single queue that is read by the main process.
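That code isn't reproduced here, but the pattern described (one work queue feeding several worker threads, results funnelled into a second queue read by the main thread) looks roughly like this sketch; the host list, worker count, and ping flags are assumptions borrowed from the other examples in this thread, and it needs a threaded perl.
#!/usr/bin/perl -w
use strict;
use threads;
use Thread::Queue;

my @hosts   = qw( www.perlmonks.com www.perl.com www.netscape.com );
my $workers = 3;

my $work    = Thread::Queue->new();   # jobs go in here
my $results = Thread::Queue->new();   # workers report back here

my @pool = map {
    threads->create(sub {
        # keep pulling hosts until we see the undef "stop" token
        while (defined(my $host = $work->dequeue())) {
            my $reply = `ping -n 1 $host`;
            $results->enqueue("$host => " . ($? ? "no reply" : "alive"));
        }
    });
} 1 .. $workers;

$work->enqueue(@hosts);
$work->enqueue(undef) for @pool;      # one stop token per worker

# the main thread reads one result per host, then joins the workers
print $results->dequeue(), "\n" for 1 .. @hosts;
$_->join() for @pool;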
Side note: a cheap, albeit admittedly much more limited, alternative way to gain some parallelization through open can be found here.
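Presumably that means a piped open: each open of a "command |" pipe starts the command right away, so several pings run concurrently even though the replies are read back one handle at a time, which is what makes it more limited. A rough sketch, reusing the host list and the -n 1 ping flag from the examples in this thread (use -c 1 on most Unix pings):
#!/usr/bin/perl -w
use strict;
my @hosts = qw( www.perlmonks.com www.perl.com www.netscape.com );
my %pipe;
for my $host (@hosts) {
    # each piped open spawns its ping immediately, so they run in parallel
    open $pipe{$host}, "ping -n 1 $host |"
        or die "can't start ping for $host: $!";
}
for my $host (@hosts) {
    my $fh = $pipe{$host};
    my $reply = do { local $/; <$fh> };   # slurp the whole reply
    close $fh;                            # close() waits for that child
    print "$host:\n$reply\n";
}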
#!/usr/bin/perl -w
use strict;
use POSIX ":sys_wait_h";
$|++;
my @hosts = qw( www.perlmonks.com www.perl.com www.netscape.com );
my (@pids, $count);
for my $host (@hosts) {
    sleep 1; # this limits to 1 kid per second, not actually required
    $count++;
    my $pid = fork();
    die "Fork failed\n" unless defined $pid; # check fork before using $pid
    push @pids, $pid;
    next if $pid; # parent: remember the kid, reiterate and fork more
    # child: fire off a single ping (-n 1 is the Windows count flag; use -c 1 on Unix)
    my $reply = `ping -n 1 $host`;
    print "I am child $count, pinged $host\n$reply\n\n";
    exit; # child is done, don't let it fall back into the loop
}
# wait for kids to finish, no zombies on us
my $kids;
do {
    $kids = waitpid(-1, WNOHANG);
    sleep 1 if $kids == 0; # nobody has exited yet; don't spin the CPU
} until $kids == -1;       # -1 means there are no children left
print "Spawned $count kids, waited on @pids\nAll done!\n";