smithgw has asked for the wisdom of the Perl Monks concerning the following question:

All -- I am stuck on a Perl script that runs a forking queue. (I am trying to kick off 10 AIX system backups at the same time.) The script also monitors whether one of the backups runs too long and kills that process. This is where my problem occurs: if I kill off the forked process, I get no output in my %child_output hash (see the code below), because the ssh has not finished. If the process completes normally, I get the output I expect. I am not sure there is a way around this, given how I wrote my code, but I thought I might see if someone else has any insight...
sub fork_backup {
    my $lpar = shift;

    if ( ( scalar keys %children ) < $max_children ) {
        my $pid = fork;
        if ( $pid ) {
            # this is parent process
            $children{$pid} = $lpar;
            print "Child Backup [$pid][" . $children{$pid} . "] started at " . localtime() . "\n";
            $processing_time{$pid} = time();
        }
        else {
            if ( not defined $pid ) {
                die "\n\nWoah: Failed to fork a child!\n";
            }

            # this is child process
            # This is where the meat goes!
            my $backup_command;
            if ( $lpar_oslevel{$lpar} eq "5.3" ) {
                $backup_command = "ls -al /;sleep 20";   # Put the real sysback command in here later (for 5.3)
            }
            else {
                $backup_command = "ls -al /;sleep 30";   # Put the real sysback command in here later (for all other)
            }

            # Fire off the backup command, whew!
            $child_output{$lpar} = `ssh -q $lpar "$backup_command"`;

            # exit child process
            exit 0;
        }
    }
    else {
        # too much child labor! queue for later under complete_backup
        unshift( @queue, $lpar );
    }
}

Replies are listed 'Best First'.
Re: Forking Problem
by JavaFan (Canon) on Aug 27, 2008 at 20:53 UTC
    Instead of forking directly, I would do a pipe open (with -| or |-). Then in the children, I'd exec the ssh command. Now you should be able to read from the pipes in the parent process (I would use a select loop, but there are also modules to help you if you find the select loop idiom awkward).

    I don't have time today to write an example. If you aren't able to solve your problem by tomorrow, I'll try to remember to revisit this note and write an example.

      Thanks for the insight. However, I am a little stumped by your answer. Would all the children be pushing to the same pipe? (I was trying to keep everything separate, i.e., a separate log file for each ssh I kick off.) Thanks
        No, each time you do
        my $pid = open(my $child, "-|") // die "Fork failed: $!";
        you fork, and get a different pipe (there will be a pipe for each child).

        You might want to look in the "Safe Pipe Opens" section of the perlipc manual page for example code, and a description on how to open a pipe between a parent and a child.
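
        A minimal sketch of that pattern (the host names and remote command are placeholders, not the real sysback invocation): each open("-|") forks a child whose STDOUT is tied to its own pipe, the child execs the ssh, and the parent multiplexes the pipes with IO::Select, which is one way to write the select loop mentioned above.

        use strict;
        use warnings;
        use IO::Select;

        my @lpars = qw(lpar1 lpar2 lpar3);    # placeholder LPAR names
        my %lpar_of;                          # fileno => LPAR name
        my %output_of;                        # LPAR => collected output
        my $sel = IO::Select->new;

        for my $lpar (@lpars) {
            my $pid = open(my $fh, "-|") // die "Fork failed: $!";
            if ($pid) {                       # parent: watch this child's pipe
                $sel->add($fh);
                $lpar_of{ fileno $fh } = $lpar;
            }
            else {                            # child: STDOUT is now the pipe
                exec "ssh", "-q", $lpar, "ls -al /; sleep 20"
                    or die "exec failed: $!";
            }
        }

        while ($sel->count) {
            for my $fh ($sel->can_read) {
                my $lpar = $lpar_of{ fileno $fh };
                if (sysread($fh, my $buf, 4096)) {
                    $output_of{$lpar} .= $buf;
                }
                else {                        # EOF: this child has finished
                    $sel->remove($fh);
                    close $fh;                # also reaps the "-|" child
                    print "[$lpar] output:\n$output_of{$lpar}";
                }
            }
        }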

Re: Forking Problem
by repellent (Priest) on Aug 28, 2008 at 19:21 UTC
    Instead of reading the STDOUT of `ssh` in the parent, redirect the output to a file:
    $backup_command = "backup.pl >! output123.txt";

    Also, when "killing" the process, you may wish to send a SIGTERM first before SIGKILL (like kill -9). This gives your $backup_command a change to catch and handle the signal to end its operation (i.e., flushing out its output buffer).

    See perlipc on how to catch signals.
    See Suffering from Buffering on how to flush output.
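
    On the child side, a guess at what the backup script could do to honor SIGTERM (illustrative only, not the OP's actual backup command):

    $| = 1;                            # autoflush STDOUT so output isn't lost
    $SIG{TERM} = sub {
        warn "caught SIGTERM, exiting cleanly\n";
        # close files / flush logs here before leaving
        exit 1;
    };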