in reply to Re: Optimizing existing Perl code (in practise)
in thread Optimizing existing Perl code (in practise)

I think we've gone far off track and read much too much into the initial point here: one should avoid the kind of pretzel logic that results in superfluous system calls - a class of stupid coding practices, one of which merlyn dubbed the useless use of cat.

For example, I recently saw this in a script:

my @data = `some_tool foo bar params 2> /dev/null | sort`;

Tell me, what's the point of piping to sort(1) here? You can just sort in Perl:

my @data = sort `some_tool foo bar params 2> /dev/null`;

I have seen quite a few cases where people shell out to awk(1) or grep(1) from inside a Perl script, when what they intended to do was perfectly doable in Perl and would have taken no more effort - if not less.
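To make that concrete, here's a minimal sketch of replacing a shelled-out grep with Perl's built-in grep. The log data and the ERROR pattern are invented for illustration, and an in-memory filehandle stands in for a real log file:

```perl
use strict;
use warnings;

# Instead of spawning a process, as in
#   my @errors = `grep ERROR app.log`;
# do the filtering in-process.
my $log = "ok: started\nERROR: disk full\nok: done\n";
open my $fh, '<', \$log or die "open: $!\n";
my @errors = grep { /^ERROR/ } <$fh>;
close $fh;

print @errors;   # prints the single ERROR line
```

Same result, no extra process, and you get real Perl regexes instead of whatever flavor your local grep speaks.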

I cringe every time, and I do call that a peeve of mine - I think you would too.

Really, what's the point of writing

my $text = do { open my $fh, '<', $file or die "open: $!\n"; local $/; <$fh> };

when you can just write

my $text = `cat $file`;
This example is just silly, IMHO. If I need `cat $file` more than once, I'll write a

sub slurp_file($) { my $file = shift; open my $fh, '<', $file or die "open $file: $!\n"; local $/; <$fh> }

and from then on it's slurp_file $file - no extra process, no extra typing, no loss of clarity. If I only need to slurp a file once, then of course the effort is silly - but then it's very unlikely I'll need to at all, given that Perl offers the lovely @ARGV / diamond operator combo. In the remaining 0.5% of cases, sure, I concede that cat(1) would be the tool of choice.

Makeshifts last the longest.