TheFluffyOne has asked for the wisdom of the Perl Monks concerning the following question:

I have a script that launches various external processes to perform tasks in a 3rd party software product. Unfortunately, these tasks will sometimes hang, and so the entire script will hang.

This is running in Perl 5.6 on AIX, and I am not permitted to install any additional modules. The solution I have come up with is shown below.

This seems to work OK, but I would like to know if there is a cleverer way to do it. Also, this code doesn't cope well if the spawned process completes within the time limit. I'm guessing I should be able to use waitpid() to check whether the child has finished running; is that a sensible way to do it?

One final thing to note is that I need to work with any output returned from the spawned process. At present I am using code such as this:

my $rc = `/usr/bin/somecommand param1 param2`;

How might I get this output back into the parent process?

#!/usr/bin/perl -w
use strict;

if (!defined(my $pid = fork)) {
    print "Failed to fork.\n";
    exit -1;
}
else {
    if ($pid == 0) {
        print "PID variable is zero; I am the child.\n";
        # This is where we would execute the external process.
        # Replaced this with a sleep loop to simulate a long-running process.
        for (my $x = 0; $x < 10; $x++) {
            print "Sleeping iteration $x.\n";
            sleep 1;
        }
        print "Child process has finished.\n";
        exit 0;
    }
    else {
        print "I am the parent. Child PID is $pid.\n";
        my $timer = 0;
        while ($timer < 5) {
            sleep 1;
            $timer++;
        }
        # After 5 seconds, kill the child
        print "Killing the child process.\n";
        kill 1, $pid;
        print "Parent process finishing.\n";
    }
}
print "End of the forked code block.\n";
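The non-blocking waitpid() check speculated about in the question would look roughly like this. This is a sketch, not tested on AIX or Perl 5.6; the external process is simulated by a short sleep, and WNOHANG comes from the standard POSIX module:

```perl
#!/usr/bin/perl -w
use strict;
use POSIX ":sys_wait_h";    # for WNOHANG

my $pid = fork;
die "Failed to fork: $!\n" unless defined $pid;

if ($pid == 0) {
    # Child: stand-in for the external process
    sleep 1;
    exit 0;
}

# Parent: poll once a second for up to 5 seconds, reaping the
# child as soon as it exits instead of always waiting the full limit
my $reaped = 0;
for my $tick (1 .. 5) {
    if (waitpid($pid, WNOHANG) == $pid) {
        $reaped = 1;
        print "Child finished within $tick second(s).\n";
        last;
    }
    sleep 1;
}

unless ($reaped) {
    print "Timeout: killing the child.\n";
    kill 'TERM', $pid;
    waitpid($pid, 0);    # reap it so we do not leave a zombie
}
```

waitpid() with WNOHANG returns 0 while the child is still running and the PID once it has been reaped, so the parent no longer sleeps the full 5 seconds when the child finishes early.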

Re: Launching an external process with a run-time limit
by blazar (Canon) on Oct 18, 2006 at 10:41 UTC
    I have a script that launches various external processes to perform tasks in a 3rd party software product. Unfortunately, these tasks will sometimes hang, and so the entire script will hang.

    You probably want to read on alarm.

    This is running in Perl 5.6 on AIX, and I am not permitted to install any additional modules.

    Not even in your home?!?

      Alarm looks like it might simplify things a bit, so thanks for that.

      I don't run AIX at home; this is at work :-P
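The alarm idiom being suggested can be sketched like this, reusing the hypothetical command from the question. The alarm/eval/die pattern is standard, but note the caveat from later in the thread: the timed-out child keeps running unless it is killed separately.

```perl
#!/usr/bin/perl -w
use strict;

my $rc;
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm 5;    # allow 5 wall-clock seconds
    $rc = `/usr/bin/somecommand param1 param2`;
    alarm 0;    # cancel the timer; the command finished in time
};
if ($@ && $@ eq "timeout\n") {
    # NB: the spawned process is still running at this point
    warn "Command timed out.\n";
}
elsif ($@) {
    die $@;     # some other failure inside the eval
}
```

On success $rc holds the command's output exactly as with plain backticks, which also answers the question of getting output back into the parent.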

Re: Launching an external process with a run-time limit
by Anonymous Monk on Oct 18, 2006 at 12:05 UTC
    Why reinvent the wheel? You're running on AIX, so you have a decent shell. This is a solved problem - reuse it. And you don't need additional modules.

    The code below works on the AIX system I tried it on:

    my $rc = `ulimit -t 5; /usr/bin/somecommand param1 param2`;
      I was under the impression that ulimit -t worked on CPU seconds rather than wall-clock seconds, and therefore if the hung process wasn't using CPU cycles this wouldn't work. I'll have to give this a go.
Re: Launching an external process with a run-time limit
by jpollack (Novice) on Oct 18, 2006 at 13:30 UTC
    Hello, I use a function like this as my backtick replacement (with timeouts). I use alarm instead of fork & select.
    sub texec {
        my $timeout = shift;
        my $out = [];
        eval {
            local $SIG{ALRM} = sub { die "timeout\n" };
            alarm($timeout);    # set timer .. hope the platform has signals
            my $cpid = open(my $fh, "-|") || exec(@_);
            $| = 1;             # line buffer
            while (<$fh>) {
                chomp;
                push(@{$out}, $_);
            }
            close($fh);
            alarm 0;            # reset, we've got all the data and we're cool
        };
        # If something died in the eval, return a ref to a list containing
        # the output as well as the die text. Otherwise just the output.
        return (($@) ? [$out, $@] : $out);
    }
    One downside is that I haven't managed to shave the yak of getting capture of both STDOUT and STDERR working on both Windows and *nix.

    Usage is:

    my $res = texec(10, "/usr/bin/ls", "-al", "/");
    if (ref($res->[0])) {
        print "Error while executing: $res->[1]\nCommand output:\n";
        print Dumper($res->[0]), "\n";
    }
    else {
        print "Command output:\n";
        print Dumper($res), "\n";
    }
      my $cpid = open (my $fh, "-|") || exec (@_);
      Two things can fail here, and in both cases you carry on as if nothing had happened. The open may fail; if it does, the subroutine performs an exec, never to return (unless the exec fails). The exec might fail as well, in which case the child executes the rest of the subroutine, and then the rest of the program.
      $| = 1; # line buffer
      This unbuffers STDOUT, for the rest of the program. The subroutine itself never writes to STDOUT, making the unbuffering pointless for the subroutine and possibly damaging for the rest of the program.
      close ($fh);
      No checking of the return value? $fh is a pipe, and certain failures won't be reported until you close the pipe.

      Also, if the timeout is triggered, the die in the parent is caught by the eval, but the spawned process carries on. You might be lucky: it writes something to the pipe, gets a SIGPIPE, and doesn't survive it. But you ought to kill the process explicitly; its PID is stored in $cpid, which is currently unused.
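      Folding the reviewer's points into texec might look like this. A sketch only: the return convention is kept from the original, and the SIGTERM in the handler is one reasonable choice, not the only one.

```perl
#!/usr/bin/perl -w
use strict;

sub texec {
    my ($timeout, @cmd) = @_;
    my ($cpid, @out);
    eval {
        local $SIG{ALRM} = sub {
            kill 'TERM', $cpid if $cpid;    # stop the runaway child too
            die "timeout\n";
        };
        alarm $timeout;
        $cpid = open(my $fh, "-|");
        die "fork failed: $!\n" unless defined $cpid;
        unless ($cpid) {    # child side of the pipe
            exec @cmd or do { warn "exec failed: $!\n"; exit 127 };
        }
        while (<$fh>) {
            chomp;
            push @out, $_;
        }
        # Certain failures only show up when the pipe is closed
        close $fh or warn $! ? "close failed: $!\n"
                             : "child exited with status $?\n";
        alarm 0;
    };
    return $@ ? [\@out, $@] : \@out;
}
```

      This checks the open, exits the child if the exec fails (so it can never fall through into the parent's code), checks close, and kills the child from the ALRM handler before the die unwinds.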

      Excellent, thanks for the code. It looks like this does what I need, though I'll have to see whether the STDOUT/STDERR capture is an issue. Cheers!
Re: Launching an external process with a run-time limit
by fenLisesi (Priest) on Oct 18, 2006 at 10:39 UTC
      Thanks, that IPC stuff may come in very handy.