in reply to How to capture process and redirect STDOUT in a hash

I haven't tried to use Net::OpenSSH(::Parallel), but the docs for Net::OpenSSH say that its constructor accepts hash-style parameters "default_stdout_file" and "default_stderr_file" (storing these outputs to named files), or "default_stdout_fh" and "default_stderr_fh" (passing file handles to receive these outputs); the $ssh->system() method accepts the same options on a per-call basis, without the "default_" prefix (e.g. "stdout_file", "stderr_fh").

So one approach might be: for each process in your queue, save stdout and stderr to distinct files (using different names for each process), and then read those files back when the $ssh->system() calls are all done.
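
For what it's worth, here is a rough, untested sketch of that approach using plain Net::OpenSSH (not the ::Parallel wrapper); the host names and the remote command are placeholders of my own:

use strict;
use warnings;
use Net::OpenSSH;

my @hosts = qw( hostA hostB );    # placeholder host names
for my $host ( @hosts ) {
    my $ssh = Net::OpenSSH->new( $host );
    die "connection to $host failed: " . $ssh->error if $ssh->error;

    # one stdout file and one stderr file per process, named after the host
    $ssh->system( { stdout_file => "$host.out",
                    stderr_file => "$host.err" },
                  'some_remote_command' )    # placeholder command
        or warn "remote command failed on $host: " . $ssh->error;
}
# ...then open and parse "$host.out" / "$host.err" after the runs finish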

I expect you could also open file handles that write to in-memory scalar variables (see the description of open(FH,'>',\$variable) in the man page for open), pass those file handles to $ssh->system() as the default stdout/stderr, and then just do regex matches on those variables when the processes are done.
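
(Just to show the in-memory filehandle idiom on its own, with no ssh involved, here's a tiny self-contained snippet:)

open( my $fh, '>', \my $buffer ) or die "open failed: $!";
print $fh "some captured output\n";    # goes into $buffer, not onto disk
close $fh;
print "found it\n" if $buffer =~ /captured/;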

I think you'll want to use two separate outputs for each process (separating stderr and stdout), because each output stream of each process arrives asynchronously, and if more than one stream goes to a single file handle, the data might get interleaved in ways you wouldn't expect or want (e.g. a stderr message landing in the middle of a stdout line).

Re^2: How to capture process and redirect STDOUT in a hash
by thanos1983 (Parson) on Jan 02, 2015 at 08:41 UTC

    Hello graff,

    Thank you for your time and effort in reading and replying to my question. I was reading about that too, and I have actually managed to create a solution that writes the data into a file; I then open that file and process the data according to my needs.

    What I was hoping to achieve was to avoid that round trip: opening this file to read it, then opening another file to process the data and write it out. I was hoping there was a way to capture the STDOUT and process it before it is stored in the file.

    Writing to the file is easily done through the parameters that you set at the beginning. I am posting the solution just in case someone in the future might be interested in it.

    Sample code is provided below:

    open my $stdout_fh, '>>', 'test.log' or die $!;

    foreach my $hash ( @mps ) {
        $pssh->add_host( $ini{$hash}{host},
                         user              => $ini{$hash}{user},
                         port              => $ini{$hash}{port},
                         password          => $ini{$hash}{psw},
                         default_stdout_fh => $stdout_fh );
    }

    It is also possible to store the STDOUT in different file(s). That would be quite easy: just add another parameter to my conf.ini file, and add a hash value in the foreach loop where you add the devices to probe. By doing that you can have a different STDOUT file for each device. Of course, doing that also means opening a file, keyed by the hash, in the same foreach loop, and, as a final step after all the writing is done, closing the files in another loop (as sketched below).

    It might sound complicated, but in reality it is extremely simple.
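
    Just to illustrate, here is an untested sketch of that per-device variant; the "logfile" key in conf.ini and the %log_fh hash are hypothetical additions, not part of the code above:

    my %log_fh;
    foreach my $hash ( @mps ) {
        # one output file per device, named by a new "logfile" entry in conf.ini
        open $log_fh{$hash}, '>>', $ini{$hash}{logfile}
            or die "cannot open $ini{$hash}{logfile}: $!";
        $pssh->add_host( $ini{$hash}{host},
                         user              => $ini{$hash}{user},
                         port              => $ini{$hash}{port},
                         password          => $ini{$hash}{psw},
                         default_stdout_fh => $log_fh{$hash} );
    }
    # ... run the parallel jobs here ...
    close $log_fh{$_} or warn "close failed for $_: $!" for keys %log_fh;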

    Seeking for Perl wisdom...on the process of learning...not there...yet!
      Here's a simple demonstration that uses variables as the storage for output file handles. You should be able to set up this sort of HoH (or AoH?) to keep track of the distinct "output file handles" of the various child processes, and handle the resulting output data in a simple, comprehensive way.

      For this example, I'm just using some random time stamps as keys for each log, but you could use anything that makes sense for your app. Again, I'd be inclined to use separate variables for stderr and stdout of each child, but maybe that's not necessary in your case.

      #!/usr/bin/perl
      use strict;
      use warnings;

      my %logs;
      for ( 0 .. 2 ) {
          my $id = time();
          open( $logs{$id}{fh}, '>', \$logs{$id}{var} )
              or die "open failed on # $_: $!\n";
          sleep int(rand(3)) + 1;  # (i.e. for a small but variable number of seconds)
      }
      printf "log-file ids are: %s\n\n", join( " ", sort keys %logs );

      for ( 1 .. 12 ) {
          my $id = ( keys %logs )[ int(rand(3)) ];
          print "Sending entry # $_ to log $id\n";
          print {$logs{$id}{fh}} "this is log event # $_\n";
      }
      print "\n";

      # How many entries per log?
      for my $id ( sort keys %logs ) {
          my @entries = split( /\n/, $logs{$id}{var} );
          printf "log_id %s got %d entries\n", $id, scalar @entries;
      }
      print "\n";

      # Which log got entry #4?
      for my $id ( sort keys %logs ) {
          next unless ( $logs{$id}{var} =~ /4/ );
          print "Here is the log for $id, containing the fourth entry:\n$logs{$id}{var}\n";
      }
      (Minor update: I changed the numeric range in the second "for" loop, so that entry # 4 is also the fourth entry.)