I think you've misunderstood what I'm after.
I want to execute multiple commands, piped together, but with error checking, STDERR capture and the like, from within my scripts (something along the lines of the sketch below); hence the comments about the IPC:: modules and piped open()s. I'm not trying to make single scripts behave nicely as part of a pipeline.
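To make that concrete, here's roughly the sort of thing I'm after, sketched with IPC::Run; the command names, options and file names are just placeholders, not my real pipeline:

use strict;
use warnings;
use IPC::Run qw(run);

# Placeholder commands standing in for the real pipeline stages.
my @producer = ( 'producer_cmd', '--some-option', 'input.dat' );
my @consumer = ( 'consumer_cmd' );

my ( $err1, $err2 ) = ( '', '' );

# Stream the producer's STDOUT straight into the consumer's STDIN,
# capture each stage's STDERR separately, and send the final output
# to a file rather than holding it in memory.
run( \@producer, '2>', \$err1,
     '|',
     \@consumer, '>', 'output.dat', '2>', \$err2 )
    or die "pipeline failed (exit ", ( $? >> 8 ), "):\n$err1$err2";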
Cheers.
BazB.
Update to pg's update: the suggestion to read the output from the first command into an array will not work.
That data could be up to ~30 million lines, or around 50 GB.
That's the whole point of pipes: you don't have to store the whole dataset in memory, nor waste time writing intermediate stages to disk.
The potential size of the input is also why I use while loops in my current code, although read() would probably be more efficient, since the data consists of fixed-length records.
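For reference, the read() version I have in mind looks something like this; the record length and file name are made up for the example:

use strict;
use warnings;

# Hypothetical values - the real record length depends on the data format.
my $RECLEN = 512;
open my $fh, '<', 'big_input.dat' or die "open: $!";
binmode $fh;

while (1) {
    my $got = read( $fh, my $record, $RECLEN );
    die "read failed: $!" unless defined $got;
    last if $got == 0;                              # EOF
    warn "short record ($got bytes)\n" if $got != $RECLEN;
    # ... process $record here ...
}
close $fh or die "close: $!";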
Doing this directly in the shell might be easier, but the benefits of using Perl to build the application outweigh that.
Update 2: Ah ha! Me thinks pg(++!) has cracked it.
pipe() seems to be the way to go. I'm rather surprised that I'd not come across it before.
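In case anyone finds this thread later, here's a rough sketch of the pipe()/fork()/exec() approach as I understand it; the command names are placeholders and the error handling is only the bare minimum:

use strict;
use warnings;

# Placeholder commands for the two pipeline stages.
my @producer = ( 'producer_cmd', 'input.dat' );
my @consumer = ( 'consumer_cmd' );

pipe( my $reader, my $writer ) or die "pipe: $!";

my $pid1 = fork();
die "fork: $!" unless defined $pid1;
if ( $pid1 == 0 ) {                     # first child: writes to the pipe
    close $reader;
    open STDOUT, '>&', $writer or die "dup STDOUT: $!";
    exec @producer or die "exec @producer: $!";
}

my $pid2 = fork();
die "fork: $!" unless defined $pid2;
if ( $pid2 == 0 ) {                     # second child: reads from the pipe
    close $writer;
    open STDIN, '<&', $reader or die "dup STDIN: $!";
    exec @consumer or die "exec @consumer: $!";
}

# Parent: close both ends so the children see EOF properly,
# then reap each child and check its exit status.
close $reader;
close $writer;
for my $pid ( $pid1, $pid2 ) {
    waitpid( $pid, 0 );
    my $status = $? >> 8;
    warn "child $pid exited with status $status\n" if $status;
}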