in reply to How to capture compile errors from child program?

The easy way to make external calls is with backticks. You can easily add the STDERR redirect (2>&1) to the command if you need to capture STDERR as well; without it, the child's STDERR is simply interleaved with the parent program's STDERR. As soon as you actually want to keep the various streams separate, I usually jump to IPC::Open3 or a full-blown pipe/fork/exec as described in perlipc. There are popular wrappers out there that are supposed to handle this, e.g. Capture::Tiny, but I haven't gotten much utility from them.
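To make that concrete, here is a minimal sketch of both approaches; the perl -c invocation and the script name are just placeholders for whatever child command you actually run:

    use strict;
    use warnings;
    use IPC::Open3;
    use Symbol qw(gensym);

    # Backticks: let the shell merge the child's STDERR into its STDOUT.
    my $merged = `perl -c some_script.pl 2>&1`;

    # IPC::Open3: keep STDOUT and STDERR as separate handles.
    my $err = gensym;    # open3 wants a real glob for the error handle
    my $pid = open3(my $in, my $out, $err,
                    'perl', '-c', 'some_script.pl');
    close $in;           # nothing to feed the child
    my @stdout = <$out>;
    my @stderr = <$err>; # perl -c writes its diagnostics here
    waitpid $pid, 0;     # reap the child; its exit status lands in $?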

#11929 First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.

Re^2: How to capture compile errors from child program?
by anonymized user 468275 (Curate) on Aug 03, 2015 at 13:10 UTC
    Backticks may be easy, and on Windows they perform the same as 'open', but on *nix they perform worse for some reason. The following code:
    use strict;
    use warnings;
    use Time::HiRes qw(time);
    use POSIX qw(strftime);

    timestamp();
    my $fred = `echo plenty of fish`;
    timestamp();
    warn $fred;
    timestamp();
    open my $fh, 'echo ' . $fred . ' |';
    $fred = <$fh>;
    close $fh;
    timestamp();
    warn $fred;
    timestamp();

    sub timestamp {
        my $t = time;
        my $date = strftime "%Y%m%d %H:%M:%S", localtime $t;
        $date .= sprintf ".%03d", ($t - int($t)) * 1000;
        print $date, "\n";
    }
    produced the following results on three platforms:

    Windows:  backtick: 12ms   open: 11ms
    SunOS:    backtick: 17ms   open:  8ms
    Debian:   backtick:  1ms   open:  0ms

    It doesn't look too significant with the trivial subprocess above, but if the subprocess is heavier the difference increases disproportionately on Debian - enough to change site standards.

    One world, one people

      First, it's not the weight of the subprocess, it's the number of invocations that would be problematic. Second, since you are only running your code once, there are all sorts of problems with your benchmark. Use Benchmark to actually test performance.
      #!/usr/bin/perl
      use strict;
      use warnings;
      use Benchmark qw(:all :hireswallclock);

      cmpthese(20, {
          'Backtick' => sub { `echo plenty of fish` for 1 .. 10 },
          'Open'     => sub {
              for (1 .. 10) {
                  open my $fh, "echo plenty of fish |";
                  <$fh>;
              }
          },
      });
      outputs
                 Rate     Open Backtick
      Open     9.29/s       --      -1%
      Backtick 9.35/s       1%       --
      on my Windows box and
                 Rate     Open Backtick
      Open     85.8/s       --     -10%
      Backtick 95.7/s      11%       --
      on a Linux server (with iteration count upped to be meaningful).

      As an aside, if you are optimizing away milliseconds of overall run time, you probably shouldn't be using Perl.


      #11929 First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.

        Thanks for the tip about Benchmark. But I don't agree one should abandon Perl just because some snippet measures its execution in milliseconds. What you are most missing here is that one of the main reasons to use Perl is to avoid shelling out (or in) at all in the first place. So those milliseconds can and will be optimised away with Perl! Conversely, it is much more of a challenge (if even remotely possible) to do so in scripting languages that lack Perl's easy access to C libraries.
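        To illustrate the point (the file name is just a placeholder): counting lines by shelling out to wc spawns a process on every call, while the pure-Perl version spawns none, so there is no subprocess overhead left to measure:

        use strict;
        use warnings;

        # Shelling out: one process spawn per call.
        chomp(my $count_shell = `wc -l < some_file.txt`);

        # Pure Perl: no subprocess at all.
        open my $fh, '<', 'some_file.txt' or die "some_file.txt: $!";
        my $count = 0;
        $count++ while <$fh>;
        close $fh;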

        One world, one people