Re: capture stdout and stderr from external command
by zentara (Cardinal) on Nov 11, 2011 at 18:53 UTC
Have you seen this before?
#!/usr/bin/perl
use warnings;
use strict;
use IPC::Open3;
use IO::Select;

my $pid = open3( \*WRITE, \*READ, \*ERROR, "/bin/bash" );

my $sel = IO::Select->new();
$sel->add(\*READ);
$sel->add(\*ERROR);

my ( $error, $answer ) = ( '', '' );

while (1) {
    print "Enter command\n";
    chomp( my $query = <STDIN> );

    # send query to bash
    print WRITE "$query\n";

    foreach my $h ( $sel->can_read ) {
        my $buf = '';
        if ( $h eq \*ERROR ) {
            sysread( ERROR, $buf, 4096 );
            if ($buf) { print "ERROR-> $buf\n" }
        }
        else {
            sysread( READ, $buf, 4096 );
            if ($buf) { print "$query = $buf\n" }
        }
    }
}

waitpid( $pid, 1 );
# It is important to waitpid on your child process,
# otherwise zombies could be created.
my $max_procs = 60;
my $pm = new Parallel::ForkManager($max_procs);

foreach my $child ( 0 .. $#cmds ) {
    my $pid = $pm->start($cmds[$child]) and next;

    # This is where I need to get the stdout and stderr.
    # $cmds[$child] can be an external command such as
    # windowsexecutable.exe $args (for example perl.exe script.pl $arg1 $arg2)
    system($cmds[$child]);
    my $Result = $? >> 8;
    $pm->finish($Result);    # pass an exit code to finish
}
$pm->wait_all_children;
The above sample code sets $max_procs to 60 and forks up to 60 child processes at a time, each executing one system command. All I need is to get the stdout and stderr of the executable run by the system command into some Perl variable.
"All I need is to get the stdout and stderr of the executable executed using the system command into some perl variable"

Your problem is that forking runs the command in a different process ($pid), and if you put its stdout and stderr into a Perl variable in the child, it won't be seen in the parent. You will need to open some pipes from the children back to the parent to write back your returns. Or you could use threads or some other form of IPC, like shared memory segments. See forking with Storable and IPC::ShareLite.
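For instance, Parallel::ForkManager itself can do that plumbing: recent versions let a child hand a data structure back to the parent through finish() and a run_on_finish callback. A rough, untested sketch (the @cmds entries here are just placeholders):

use strict;
use warnings;
use Parallel::ForkManager;

my @cmds = ( 'perl -e "print qq(one\n)"', 'perl -e "warn qq(two\n)"' );
my $pm   = Parallel::ForkManager->new(60);

my %output;    # index => captured output, collected in the parent
$pm->run_on_finish( sub {
    my ( $pid, $exit, $ident, $signal, $core, $data ) = @_;
    $output{$ident} = $data->{out} if $data;
} );

for my $i ( 0 .. $#cmds ) {
    $pm->start($i) and next;              # parent moves on to the next command
    my $out  = `$cmds[$i] 2>&1`;          # child: stdout and stderr together
    my $code = $? >> 8;
    $pm->finish( $code, { out => $out } );   # hand the output back to the parent
}
$pm->wait_all_children;

print "command $_:\n$output{$_}\n" for sort { $a <=> $b } keys %output;

The 2>&1 redirection merges the two streams; if you need them separate in each child, capture them into two variables there instead before calling finish().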
Windows can also do 2>&1 like bash. So how about like this?
$pm->start($cmds[$child] . " > /tmp/$$.$child.log 2>&1")
$child is unique across the commands, so you could collect the outputs from the log files afterwards.
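Fleshed out a bit (untested sketch; File::Spec->tmpdir stands in for a hard-coded /tmp so the same code works on Windows, and @cmds is a placeholder): append the redirection to the command that is actually run, then read the per-child logs back after wait_all_children.

use strict;
use warnings;
use Parallel::ForkManager;
use File::Spec;

my @cmds   = ( 'perl -e "print 1"', 'perl -e "warn 2"' );
my $tmpdir = File::Spec->tmpdir;
my $pm     = Parallel::ForkManager->new(60);

for my $child ( 0 .. $#cmds ) {
    my $log = File::Spec->catfile( $tmpdir, "$$.$child.log" );
    $pm->start and next;
    system("$cmds[$child] > $log 2>&1");    # both streams into the log
    $pm->finish( $? >> 8 );
}
$pm->wait_all_children;

for my $child ( 0 .. $#cmds ) {
    my $log = File::Spec->catfile( $tmpdir, "$$.$child.log" );
    open my $fh, '<', $log or next;
    local $/;                               # slurp the whole log
    print "command $child:\n", <$fh>;
}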
Re: capture stdout and stderr from external command
by Kc12349 (Monk) on Nov 11, 2011 at 23:58 UTC
Take a look at IO::CaptureOutput. It will allow you to capture stdout and stderr returned from external commands.
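For example (a minimal sketch, assuming capture_exec's documented list-context interface; the command is only a placeholder):

use strict;
use warnings;
use IO::CaptureOutput qw(capture_exec);

my ( $stdout, $stderr, $success, $exit_code ) =
    capture_exec( 'perl', '-e', 'print "out"; warn "err"' );

print "STDOUT: $stdout\n";
print "STDERR: $stderr\n";
print "exit:   $exit_code\n";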
Re: capture stdout and stderr from external command
by juster (Friar) on Nov 11, 2011 at 23:47 UTC
The simplest solution that pops into my mind is a piped open. I haven't tried this on Windows, but it doesn't do anything fancy:
#!/usr/bin/env perl
use warnings;
use strict;

my %cmds;
my ($count, $limit, $actmax) = (0, 10, 2);
my ($cmdline) = 'sleep 1; echo BOOM';
# I think this command will work in Windows but not sure...

sub spawncmd
{
    my ($cmd) = @_;
    my $pid = open(my $of, '-|', $cmd) or die "open: $!";
    ++$count;
    $cmds{$pid} = $of;
    warn "Spawned $pid";
    return $pid;
}

sub cleancmd
{
    my ($pid) = @_;
    die "error: unknown cmd pid: $pid" unless(exists $cmds{$pid});
    my $of = delete $cmds{$pid};
    print "$pid: $_" while(<$of>);
    close($of);
}

spawncmd($cmdline) for(1 .. $actmax);

while(keys(%cmds) > 0){
    my $pid = waitpid(-1, 0);
    die sprintf("error: pid $pid had exit code %d", $? >> 8) if($? != 0);
    cleancmd($pid);
    spawncmd($cmdline) if($count < $limit);
}
This simplifies things by blocking until one of the running commands exits, then reading the command's output all at once before closing the pipe. Standard output and error streams are read together and not separated with a piped open. You didn't specify whether you wanted them together or separate. Parallel processing seems unnecessary in this situation but maybe you have performance requirements you didn't mention.
...reading the command's output all at once before closing the pipe.
The problem with this approach is if the pipe's buffer should fill up, you have a deadlock (i.e. the spawned program doesn't exit because it waits for the buffer to be emptied, but this won't happen because you wait for the program to exit). Replace echo BOOM with perl -e"print q(x)x1e5" and you can observe the effect.
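One untested way around that, reusing the piped-open approach from the parent post, is to drain every pipe with IO::Select as output arrives and only close (and thereby reap) a pipe once it hits EOF, so no child can stall on a full pipe buffer:

#!/usr/bin/env perl
use strict;
use warnings;
use IO::Select;

my $cmdline = 'perl -e "print q(x) x 1e5"';    # the problem case above
my %cmds;                                      # fileno => pid
my %out;                                       # pid => collected output
my $sel = IO::Select->new;

for ( 1 .. 2 ) {
    my $pid = open( my $of, '-|', $cmdline ) or die "open: $!";
    $cmds{ fileno($of) } = $pid;
    $sel->add($of);
}

while ( $sel->count ) {
    for my $fh ( $sel->can_read ) {
        my $pid = $cmds{ fileno($fh) };
        if ( sysread( $fh, my $chunk, 4096 ) ) {
            $out{$pid} .= $chunk;              # keep draining, no deadlock
        }
        else {
            $sel->remove($fh);                 # EOF: child is done writing
            close $fh;                         # close() on a piped open reaps it
            printf "%s: %d bytes\n", $pid, length $out{$pid};
        }
    }
}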
Re: capture stdout and stderr from external command
by Anonymous Monk on Nov 11, 2011 at 17:39 UTC
What is an "external" command? Can you give an example?
The external command is a Windows executable with some arguments passed to it. The executable prints some statistics to the command window when the args are passed to it. For example, it's something like
"perl.exe script.pl $args" (where $args change for each child process being forked)
My aim is to stop printing the executable's output to the command window and instead consolidate the output of 5000 runs of the executable with different arguments, then selectively print it at the end.
Re: capture stdout and stderr from external command
by sundialsvc4 (Abbot) on Nov 15, 2011 at 12:17 UTC
I would arrange this problem by spawning n (say, 60...) workers, each of which withdraws an $args string from a serialized queue and executes that command, waiting for the command to complete. (Remember that you will therefore have n * 2 processes running at the same time: the workers and the commands, but also that you will never have any more than that, no matter how large the work-queue may be.)
After a worker has launched a command and the command has completed, the worker is also responsible for disposing of its output. Perhaps this involves the use of a mutual-exclusion object, which the worker is obliged to lock before it may transfer the output to the permanent store, and to release when it has finished doing so.
The workers, once launched, will survive until the queue has become exhausted, then they will all politely put away all of their toys, bid the world adieu, and die. (The main thread, after having launched the workers, really has nothing at all to do except to wait for its children to all die, which means that the job is done.)
The number of workers is a tunable parameter (60) which is entirely independent of the size of the queue (5,000) and therefore it can be freely adjusted. If the system begins to “thrash,” simply ease off on the throttle next time. With a little bit of experimentation you will quickly find the “sweet spot” for your particular system.
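Sketched with core threads and Thread::Queue (untested; the command line, file name, and helper name are invented for illustration), that arrangement might look like:

#!/usr/bin/env perl
use strict;
use warnings;
use threads;
use Thread::Queue;
use Fcntl qw(:flock);

my $n_workers = 60;
my @all_args  = map { "job-$_" } 1 .. 5000;
my $queue     = Thread::Queue->new(@all_args);   # work queue, filled up front

sub worker {
    # dequeue_nb returns undef once the queue is empty, so each worker
    # simply exits when the work is exhausted.
    while ( defined( my $args = $queue->dequeue_nb ) ) {
        my $out = `perl.exe script.pl $args 2>&1`;   # stdout + stderr together
        append_result("--- $args ---\n$out");
    }
}

# Serialize writes to the permanent store with an exclusive lock,
# playing the role of the mutual-exclusion object described above.
sub append_result {
    my ($text) = @_;
    open my $fh, '>>', 'results.log' or die "open: $!";
    flock $fh, LOCK_EX or die "flock: $!";
    print {$fh} $text;
    close $fh;                                       # releases the lock
}

my @workers = map { threads->create( \&worker ) } 1 .. $n_workers;
$_->join for @workers;    # the main thread just waits for the pool to finish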