darwinscusp has asked for the wisdom of the Perl Monks concerning the following question:

Monks,

I have need of your guidance, for the following problem baffles my impure mind.

The following code works insofar as the Dumper output contains what I expect, but there is no parallel processing -- the children seem to operate sequentially rather than simultaneously.

If I execute the code in the second listing, I can't capture the output, but the children fork and work in parallel.

What am I doing wrong?

Thanks

Listing 1.
#!/usr/bin/perl -w
use strict;
use warnings;
use Data::Dumper;
use IO::Handle;

my @servers = qw/server1 server2 server3/;
chomp @servers;
my %results = ();
my @kids    = ();

for my $host (@servers) {
    pipe(CHILD_RDR, PARENT_WTR);
    PARENT_WTR->autoflush(1);
    if (my $pid = fork) {
        close PARENT_WTR;
        push @kids, $pid;
        $results{$host} = receive_message(\*CHILD_RDR);
        close CHILD_RDR;
    }
    else {
        die "Cannot fork: $!\n" unless defined $pid;
        close CHILD_RDR;
        my $cmd    = q{/usr/bin/ssh } . $host . q{ /bin/date 2>/dev/null};
        my $result = qx{$cmd};
        chomp $result;
        send_message(\*PARENT_WTR, $result);
        close PARENT_WTR;
        exit;
    }
}

for (@kids) {
    waitpid($_, 0);
}

print Dumper %results;

sub send_message {
    my $handle = shift;
    my $msg    = shift;
    print $handle $msg;
    return;
}

sub receive_message {
    my $handle = shift;
    my $msg;
    chomp($msg = <$handle>);
    return $msg;
}
Listing 2.
#!/usr/bin/perl -w
use strict;
use warnings;
use Data::Dumper;
use IO::Handle;

my @servers = qw/server1 server2 server3/;
chomp @servers;
my @kids = ();

for my $host (@servers) {
    if (my $pid = fork) {
        push @kids, $pid;
    }
    else {
        die "Cannot fork: $!\n" unless defined $pid;
        my $cmd    = q{/usr/bin/ssh } . $host . q{ /bin/date 2>/dev/null};
        my $result = qx{$cmd};
        chomp $result;
        print "$host $result\n";
        exit;
    }
}

for (@kids) {
    waitpid($_, 0);
}

Replies are listed 'Best First'.
Re: Forking children operate sequentially?!
by ikegami (Patriarch) on Mar 11, 2010 at 19:17 UTC
    You wait for output from the youngest child before launching another, and the child presumably sends its output only when it's done.

    By the way, PARENT_WTR is poorly named since the parent doesn't write to it. I prefer TO_CHLD and FR_CHLD.

      Thanks ikegami.

      Is it possible to *not* wait for the child to return output, yet still capture it?

      Yeah, those names suck. I copied them from some other code; I think it originated in the cookbook.

        Is it possible to *not* wait for the child to return output, yet still capture it?

        Well, what you want to do is move the waiting to the outside of the loop, and wait for all the children "at once" (via polling, select, or whatever). Below is a solution using Thread::Queue.
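        A minimal sketch of that idea, sticking with fork and pipes rather than Thread::Queue: fork every child first, holding on to each child's read handle, and only start reading once the whole brood is running. (The ssh call is replaced with a stand-in string so the sketch runs anywhere; plain in-order reads are enough for parallelism, though select would additionally avoid stalling on one slow child.)

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @servers = qw/server1 server2 server3/;

my %rdr;    # read handle for each child, keyed by host
my @kids;

for my $host (@servers) {
    pipe(my $child_rdr, my $parent_wtr) or die "pipe: $!";
    my $pid = fork;
    die "Cannot fork: $!\n" unless defined $pid;
    if ($pid) {
        # Parent: keep the read end and immediately fork the next child.
        close $parent_wtr;
        $rdr{$host} = $child_rdr;
        push @kids, $pid;
    }
    else {
        # Child: do the slow work, report, exit.
        close $child_rdr;
        my $result = "date-for-$host";   # stand-in for qx{/usr/bin/ssh $host /bin/date}
        print $parent_wtr "$result\n";
        close $parent_wtr;
        exit;
    }
}

# All children are already running in parallel; now collect their answers.
my %results;
for my $host (@servers) {
    my $fh = $rdr{$host};
    chomp( my $msg = <$fh> );
    $results{$host} = $msg;
    close $fh;
}
waitpid($_, 0) for @kids;

print "$_ => $results{$_}\n" for sort keys %results;
```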

Re: Forking children operate sequentially?!
by ikegami (Patriarch) on Mar 11, 2010 at 20:18 UTC
#!/usr/bin/perl -w
use strict;
use warnings;

use forks qw( async );
use forks::shared qw( );
use Thread::Queue qw( );

my $MAX_WORKERS = 4;
my @SERVERS = qw( server1 server2 server3 );

sub process_request {
    my ($server) = @_;
    return qx{/usr/bin/ssh $server /bin/date 2>/dev/null};
}

sub process_response {
    my ($server, $response) = @_;
    print("$server: $response");
}

{
    my $req = Thread::Queue->new();
    my $res = Thread::Queue->new();

    my $num_workers = $MAX_WORKERS < @SERVERS ? $MAX_WORKERS : @SERVERS;

    $req->enqueue(@SERVERS);

    my @threads;
    for (1..$num_workers) {
        push @threads, async {
            while (my $server = $req->dequeue()) {
                $res->enqueue([ $server, process_request($server) ]);
            }
        };
        $req->enqueue(undef);
    }

    process_response(@{ $res->dequeue() }) for 0..$#SERVERS;

    $_->join() for @threads;
}

    The problem with this method is that it fails badly if a child fails abnormally.

    Since nothing is added to $req once the children start getting created, you can simplify the code slightly by skipping $req->enqueue(undef); and changing $req->dequeue() to $req->dequeue_nb().
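    A minimal single-process sketch of that simplification (just the queue, no forks workers), showing why the undef terminators become unnecessary: dequeue_nb() returns undef as soon as the queue is empty, which ends the worker loop on its own.

```perl
use strict;
use warnings;
use Thread::Queue;

my $req = Thread::Queue->new();
$req->enqueue(qw/server1 server2 server3/);
# Note: no $req->enqueue(undef) terminators.

my @seen;
# dequeue_nb() never blocks: it returns undef on an empty queue,
# so the loop stops when the work runs out. The defined() test
# matters in general, in case a queued item is false (e.g. "0").
while (defined( my $server = $req->dequeue_nb() )) {
    push @seen, $server;
}
print "@seen\n";
```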