in reply to Track command time

I like this code:
sub execshell {
    # execute a shell command with a timeout
    my ($cmd, $timeout) = @_;
    $timeout ||= 5;    # seconds
    my $result;
    eval {
        local $SIG{ALRM} = sub { die "alarm\n" };  # NB: \n required
        local $/ = undef;                          # slurp all output at once
        alarm $timeout;
        my $pid = open(CMD, "$cmd 2>&1 |");        # run in a shell
        if ($pid) {
            $result = <CMD>;
            close CMD;
        }
        alarm 0;
    };
    if ($@) {
        die $@ unless $@ eq "alarm\n";  # propagate unexpected errors
        $result = 'TIMEOUT';
    }
    return $result;
}
This will time out after 5 seconds by default and return the string 'TIMEOUT', which you can test for. Otherwise it returns the command's combined output (stdout and stderr, thanks to the 2>&1 redirection), or rethrows any unexpected error raised inside the eval. HTH! SSF
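For example, here is how a caller might use it (a minimal sketch; the sub body is repeated so the example runs on its own, and the echo/sleep commands and 2-second timeout are just stand-ins):

```perl
use strict;
use warnings;

sub execshell {
    # execute a shell command with a timeout (as described above)
    my ($cmd, $timeout) = @_;
    $timeout ||= 5;
    my $result;
    eval {
        local $SIG{ALRM} = sub { die "alarm\n" };  # NB: \n required
        local $/ = undef;                          # slurp all output
        alarm $timeout;
        if (open(CMD, "$cmd 2>&1 |")) {            # run in a shell
            $result = <CMD>;
            close CMD;
        }
        alarm 0;
    };
    if ($@) {
        die $@ unless $@ eq "alarm\n";  # propagate unexpected errors
        $result = 'TIMEOUT';
    }
    return $result;
}

my $out  = execshell('echo hello', 2);           # finishes in time
my $slow = execshell('sleep 10; echo late', 2);  # exceeds the timeout

print "fast: $out";
print "slow timed out\n" if $slow eq 'TIMEOUT';
```

Note that on a timeout the child process is abandoned, not killed, so a long-running command keeps running in the background.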

Re^2: Track command time
by Illuminatus (Curate) on Oct 03, 2008 at 17:10 UTC
    This approach (using open vs system) has the added advantage of allowing for concurrency. If you want to run N commands simultaneously and report errors on those that time out, this is easy to do. You can use select on all the open filehandles to catch when they produce output or complete.
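    A select-based version might be sketched like this (hypothetical; it uses the core IO::Select module and a single shared deadline instead of alarm, and the command list and 2-second deadline are just examples):

```perl
use strict;
use warnings;
use IO::Select;

# Run several commands concurrently, collect their output, and mark any
# that have not finished by the deadline as TIMEOUT. select() on the pipe
# filehandles replaces alarm, so no signal handling is needed.
my @cmds     = ('echo fast', 'sleep 5; echo slow');
my $deadline = time + 2;

my $sel = IO::Select->new;
my (%out, %cmd_for);
for my $cmd (@cmds) {
    open(my $fh, '-|', "$cmd 2>&1") or die "cannot run $cmd: $!";
    $sel->add($fh);
    $cmd_for{$fh} = $cmd;    # map filehandle back to its command
    $out{$cmd}    = '';
}

while ($sel->count) {
    my $left = $deadline - time;
    last if $left <= 0;                 # deadline reached
    for my $fh ($sel->can_read($left)) {
        my $n = sysread($fh, my $buf, 4096);
        if ($n) { $out{ $cmd_for{$fh} } .= $buf }
        else    { $sel->remove($fh); close $fh }    # EOF: command done
    }
}
# anything still registered missed the deadline
$out{ $cmd_for{$_} } = 'TIMEOUT' for $sel->handles;

print "$_: $out{$_}\n" for @cmds;
```

    As with the alarm version, commands that miss the deadline are abandoned rather than killed; a production version would also reap or kill the stragglers.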

      No. You can only have one pending alarm timer per process, and the sequence of setting/resetting $SIG{ALRM} for several commands at once will certainly lead to race conditions.