Thanks BrowserUK
I feared that threads under Windows were flaky, but I guess that is old news. If you think they are safe, I am more than happy to use them. I have done quite a bit with them under *nix, and for a simple job like this they work perfectly there, so this does give me the one-size-fits-all solution.
Cheers, R.
Pereant, qui ante nos nostra dixerunt!
#! perl -slw
use strict;

my( $cmd, $timeout ) = @ARGV;
print "Running '$cmd' for $timeout seconds";

# Spawn the command via a piped open; $pid is the child's process id.
my $pid = open CMD, "$cmd |"
    or die "'$cmd': $!";
print $pid;

# Poll once a second until the timeout expires or the child goes away.
sleep 1 while $timeout-- and kill 0, $pid;

# Kill the child if it is still running, then collect whatever it wrote.
my $rv = kill 9, $pid;
my @capture = <CMD>;
close CMD;

print "It took too long, so I killed it" if $rv;
print 'It produced the following output:';
printf '%s', $_ for @capture;
That will allow the spawned process to run for the given number of seconds before killing it and returning whatever output it had managed to produce--but there are problems.
- The timeout will always run to completion, even if the process finishes early.
The sleep 1 while $timeout-- and kill 0, $pid; is meant to allow the timeout to be short-circuited if the process finishes early, but it doesn't work.
Even though kill 0, $pid returns false if the process never existed, it seems to keep returning true once it has returned true, even after the process has gone away.
I think this is a bug in Perl's implementation of kill on Win32, but I am finding it hard to confirm that. (A possible alternative liveness check is sketched after this list.)
- If the process produces a large volume of output, the pipe between the processes "fills", and the spawned process blocks until the spawning process reads some data from its end of the pipe.
That means the spawned process will always be killed and only partial output returned, even if it could have produced all of its data within the timeout period had it not been blocked, i.e. if the spawning process had been servicing its end of the pipe.
This problem could be alleviated by reading our end of the pipe as the output is produced, but of course, the moment we go into a read on the pipe, we block until the spawned process produces output. Back to problem one.
So the next thing (actually, the original thing) I tried was to use select on the pipe handle to determine whether there was something available to read before attempting a read, but neither select nor IO::Select->can_read() seems to work on (Win32) pipes.
If this limitation is documented, I have been unable to find it.
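One possible workaround for the first problem, if kill 0 really is misbehaving, is to ask the OS directly whether the child is still alive via Win32::Process. This is only a minimal sketch under the assumption that Win32::Process is available; still_running() is my own name, not anything from the code above:
use strict;
use warnings;
use Win32::Process;    # assumed available (ships with ActivePerl/Strawberry)

# Hypothetical helper: true while the process identified by $pid is running.
sub still_running {
    my $pid = shift;
    my $proc;
    Win32::Process::Open( $proc, $pid, 0 )
        or return 0;                   # cannot open it => it has gone away
    my $exitcode;
    $proc->GetExitCode( $exitcode );
    return $exitcode == Win32::Process::STILL_ACTIVE();
}

# It would slot into the polling loop in place of kill 0, e.g.:
# sleep 1 while $timeout-- and still_running( $pid );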
The upshot: if your process only produces a small volume of output, and you can live with always waiting for the full timeout period, the above code, or the threaded version above, may be usable; otherwise, you'd best consider some of the other options.
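For completeness, one of those other options might look like this: IPC::Run will both drain the pipe and enforce the timeout for you. A rough sketch, assuming IPC::Run is installed and behaves itself on Win32 for your command (the command line here is made up):
use strict;
use warnings;
use IPC::Run qw( run timeout );

my @cmd = ( 'osql', '-S', 'someserver', '-Q', 'select 1' );   # hypothetical command
my ( $in, $out, $err ) = ( '', '', '' );

# run() drains stdout/stderr into $out/$err as the child produces them;
# timeout(60) makes it die if the child is not finished within 60 seconds.
my $ok = eval { run \@cmd, \$in, \$out, \$err, timeout( 60 ) };

if ( !defined $ok and $@ ) {
    print "Command timed out or failed to start: $@";
    print "Partial output:\n$out" if length $out;
}
else {
    print "Command completed", ( $ok ? '' : ' (non-zero exit)' ), ":\n$out";
}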
Sorry if I gave you false hope, but I've long since given up putting too much effort into exploring things until the OP shows some interest in the possible solution I am offering. I've spent way too many hours exploring and testing possible solutions only to have the OP pop back and say "Oh, but I don't like threads!", or "Your code doesn't work exactly the way I want it to, so I'm not going to bother trying to correct it myself; I'm just going to complain and do something completely different.".
If that sounds a little jaundiced--it is:(
Examine what is said, not who speaks.
Silence betokens consent.
Love the truth but pardon error.
# this is a direct c&p from a larger opus of code
# but you should get the idea.
# $command is worked out earlier depending on the
# platform we are running on, and $trace is that
# program's logging object.
use threads;
use Thread::Queue;

my $timeout = 60;
my @sqlo;

my $Q = Thread::Queue->new;
threads->create( \&RunInThread, $command, $Q );

# The worker sends back its child's pid first, then the output lines.
my $pid = $Q->dequeue;

for ( 1 .. $timeout ) {
    sleep 1;
    my $result = $Q->dequeue_nb;   # non-blocking; returns undef if nowt on queue
    next unless $result;           # twas nowt on queue
    if ( $result eq "$pid is done" ) {
        $trace->trace( "SQL process finished within $_ seconds" );
        last;
    }
    push @sqlo, $result;
}

# Kill the child if it is still alive.  (Declared and tested separately,
# because "my $x = ... if ..." is a well-known trap.)
my $killed = kill( 0, $pid ) && kill( 9, $pid );
if ( $killed ) {
    $trace->trace( "Had to kill SQL process, taking more than $timeout secs" );
}

sub RunInThread {
    my ( $cmd, $Q ) = @_;
    my $pid = open CMD, "$cmd |" or $trace->die( "$cmd : $!" );
    $Q->enqueue( $pid );                            # tell the parent the child's pid
    $Q->enqueue( $_ ) while defined( $_ = <CMD> );  # stream the output back
    $Q->enqueue( "$pid is done" );                  # sentinel: output complete
}
The multithreading is also great, as I may have several DBs to examine, so with a little refactoring I can do them all in parallel, improving the chance of completion within the 120-second time limit no end.
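For what it's worth, a rough sketch of what that refactoring might look like: one worker thread per database, each feeding its own queue. The command lines are invented and worker() is my stand-in for RunInThread above, so treat it as an outline rather than working production code:
use strict;
use warnings;
use threads;
use Thread::Queue;

my %commands = (                          # hypothetical command lines
    db1 => 'osql -S server1 -Q "..."',
    db2 => 'osql -S server2 -Q "..."',
);
my $timeout = 120;

# Like RunInThread above, but with a plain 'DONE' sentinel and no $trace.
sub worker {
    my ( $cmd, $q ) = @_;
    if ( open my $fh, "$cmd |" ) {
        $q->enqueue( $_ ) while defined( $_ = <$fh> );
        close $fh;
    }
    else {
        $q->enqueue( "ERROR: $cmd : $!" );
    }
    $q->enqueue( 'DONE' );
}

my %queue;
for my $db ( keys %commands ) {
    $queue{$db} = Thread::Queue->new;
    threads->create( \&worker, $commands{$db}, $queue{$db} )->detach;
}

# Poll every queue until each worker reports DONE or the shared deadline passes.
my ( %output, %done );
my $deadline = time + $timeout;
while ( time < $deadline and keys %done < keys %queue ) {
    for my $db ( grep { !$done{$_} } keys %queue ) {
        while ( defined( my $line = $queue{$db}->dequeue_nb ) ) {
            if ( $line eq 'DONE' ) { $done{$db} = 1; last }
            push @{ $output{$db} }, $line;
        }
    }
    sleep 1;
}
# Anything not in %done overran the deadline and its child would still
# need killing, as in the single-DB code above.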
Cheers, R.
Pereant, qui ante nos nostra dixerunt!