in reply to Pipes? Child Processes?

One very nice thing about using pipes is that they give you an easy way to reuse tools that already exist for specific tasks. If there is already a program that runs on the command line and does a particular kind of data filtering, you don't need to link in an extra library, install another module, or (God forbid) rewrite that filtering logic in your own perl code; just use the existing command-line utility as part of a pipe.

Most of the beauty of the classic UNIX command-line utilities (sort, grep, sed, cut, paste, ls, find, ...) -- some of which have close counterparts in perl itself, like the sort and grep builtins or the File::Find module -- is that each one by itself does some simple thing very well, with an appropriate range of flexible options for tweaking its behavior; and pipelines let you plug these things together in various ways to perform a vast range of useful tasks, with just the shell command line as your programming language.
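A small made-up example of that plugging-together: pick the top scorers out of some name/score data using nothing but standard utilities.

```shell
# toy data: a name and a score on each line
printf 'carol 42\nbob 7\nalice 99\n' |
    sort -k2,2 -rn |    # numeric sort on the score column, descending
    head -n 2      |    # keep the two highest
    cut -d' ' -f1       # keep only the names
# prints: alice, carol (one per line)
```

Each stage is a general-purpose tool; only the pipeline itself is specific to this task.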

When you apply this sort of tool design to things like signal or image processing, the payoff is dramatic; a half-dozen or so basic utilities -- each of which is fairly simple, offers a handful of parameterized options, and uses a common notion of what to expect on stdin and what to produce on stdout -- will give you a virtually unlimited toolkit. (Well, there are some subtleties involved, and it can get a bit complicated, but the basic idea is still a big win compared to writing a new program every time you have to arrange a given set of operations in a different sequence.)
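As a toy sketch of that idea (the filter names here are hypothetical, standing in for real signal-processing utilities): two filters that each read one sample per line on stdin and write one sample per line on stdout, composed in different orders to get different results.

```shell
# hypothetical "signal" filters built on awk for the sketch
scale() { awk -v k="$1" '{ print $1 * k }'; }             # gain stage
clip()  { awk -v m="$1" '{ print ($1 > m ? m : $1) }'; }  # limiter

seq 1 5 | scale 3 | clip 10    # amplify, then limit: 3 6 9 10 10
seq 1 5 | clip 10 | scale 3    # same stages, other order: 3 6 9 12 15
```

Because every stage agrees on the stdin/stdout convention, reordering the pipeline is all it takes to get a different processing chain -- no new program needed.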

In essence, as you make heavier use of pipelines -- in your perl code, on the command line, or both -- you'll find that you spend less time (re)writing program code for each new task.