in reply to Use of shared memory to persist data between child processes

You could simplify this by using a module that already tackles this problem: Parallel::ForkManager.

Here is a snippet of code that uses it to have child processes update a hash in the parent:

#!/usr/bin/perl -w
use strict;
use Net::FTP;
use Parallel::ForkManager;

my %srvs   = ();
my $notify = 'you@example.com';   # recipient for the results mail (placeholder address)

my $pm = Parallel::ForkManager->new(10);   # at most 10 concurrent children

# Runs in the parent just after each child is forked; $_[1] is the ident
# passed to start(), i.e. the %srvs key.
$pm->run_on_start(
    sub { print STDERR "Connecting to $_[1], port $srvs{$_[1]}{port}\n"; }
);

# Runs in the parent when a child exits; this is where the parent's %srvs
# hash gets updated, based on the child's exit code.
$pm->run_on_finish(
    sub {
        my ( undef, $exit_code, $ident ) = @_;
        if    ( $exit_code == 0 ) { $srvs{$ident}{stat} = "Good logon to $ident\n"; }
        elsif ( $exit_code == 1 ) { $srvs{$ident}{stat} = "*** Logon to $ident failed\n"; }
        elsif ( $exit_code == 2 ) { $srvs{$ident}{stat} = "*** Connect to $ident failed\n"; }
        else                      { $srvs{$ident}{stat} = "Script error while connecting to $ident\n"; }
        print STDERR $srvs{$ident}{stat};
    }
);

# Runs in the child: report the result back to the parent via the exit code.
sub ftpcheck {
    my $id     = shift;
    my $srv    = $srvs{$id};
    my $status = 1;
    my $ftp = Net::FTP->new( $$srv{addr}, Timeout => 15, Port => $$srv{port} );
    exit(2) if !$ftp;
    $status = 0 if $ftp->login( $$srv{user}, $$srv{pass} );   # change status to 'good' if logon works
    $ftp->quit();
    exit($status);
}

# do stuff to set up %srvs as a hash of FTP servers and associated information here

# when %srvs is set up, iterate through the keys like so:
for my $child ( keys %srvs ) {
    my $pid = $pm->start($child) and next;   # parent gets the pid and moves on
    ftpcheck($child);                        # child does the work and exits
    $pm->finish;                             # not reached; ftpcheck() exits itself
}
$pm->wait_all_children;

# Quick mail hack. I was in a hurry
open my $mail, '|-', "mail -s 'ftpcheck results' $notify" or die "mail: $!";
print $mail $srvs{$_}{stat} for sort keys %srvs;
close $mail;
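For reference, the code above expects each %srvs entry to carry at least addr, port, user and pass. Something along these lines would do (hostnames and credentials are just placeholders):

my %srvs = (
    ftp1 => { addr => 'ftp1.example.com', port => 21, user => 'anonymous', pass => 'me@example.com' },
    ftp2 => { addr => 'ftp2.example.com', port => 21, user => 'anonymous', pass => 'me@example.com' },
);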

But the obvious limitation here is that a child has to exit before the parent can be updated.
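That limitation doesn't go away, but if you need to hand back more than an exit code, recent versions of Parallel::ForkManager (0.7.6 and later, if memory serves) let the child pass a data structure reference to finish(), which the parent's run_on_finish callback receives as its sixth argument (it gets serialized through a temp file behind the scenes). A minimal sketch along the lines of the code above; do_check() is a hypothetical stand-in for the child's work:

$pm->run_on_finish(
    sub {
        my ( $pid, $exit_code, $ident, $signal, $core, $data ) = @_;
        # $data is whatever reference the child passed to finish(), or undef
        $srvs{$ident}{stat} = $data->{stat} if $data;
    }
);

for my $child ( keys %srvs ) {
    $pm->start($child) and next;
    my $msg = do_check($child);            # hypothetical: returns a status message
    $pm->finish( 0, { stat => $msg } );    # ships the hashref back to the parent
}
$pm->wait_all_children;

The data still only arrives when the child finishes, of course; it just saves you from encoding everything into an exit code.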