in reply to Re: Code Efficiency: Trial and Error?
in thread Code Efficiency: Trial and Error?

I wouldn't have used perl or awk, but stayed with grep: grep -l would have done the same as your Perl script, and would likely be more efficient.
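A minimal sketch of what grep -l does here (the filenames and the code ABC123 are made up for illustration; -l prints only the names of matching files and stops reading each file at its first match):

```shell
# Hypothetical setup: two files, only one containing the code we want.
tmpdir=$(mktemp -d)
printf 'Customer Code: ABC123\n' > "$tmpdir/match.txt"
printf 'no code here\n'          > "$tmpdir/other.txt"

# -l lists matching filenames instead of matching lines,
# and stops scanning a file as soon as it matches.
grep -l 'ABC123' "$tmpdir"/*.txt

rm -rf "$tmpdir"
```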

Abigail

Re: Re: Code Efficiency: Trial and Error?
by runrig (Abbot) on Oct 14, 2002 at 20:08 UTC
    I like the simplicity of grep -l, though as Aristotle points out, it still scans all files (it's probably what I'll end up with anyway, just for the sake of maintenance, and '-l' short-circuiting the match within each file is 'good enough'). If I just look for /_$code$/, it is about as fast as the perl script when all the files need to be scanned anyway (and perl isn't all that much quicker even when the match occurs within the first few files). But when I change the pattern to "^Customer Code.* $code$", grep becomes ~3x slower. grep and sed are good at very simple regexes, but perl seems to outperform them when the patterns become even mildly complex.
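    The two patterns side by side, on a made-up file layout (the thread never shows the real data, so the filenames and line format here are assumptions; only the pattern shapes come from the post):

```shell
# Hypothetical data: each file has a header line ending in the customer code.
code=ABC123
tmpdir=$(mktemp -d)
printf 'Customer Code: region_%s\n' "$code"  > "$tmpdir/a.dat"
printf 'Customer Code: region_XYZ999\n'      > "$tmpdir/b.dat"

# Simple suffix match, anchored only at the end -- cheap for grep:
grep -l "_$code\$" "$tmpdir"/*.dat

# Anchored at both ends with a .* in the middle -- the mildly more
# complex kind of pattern where runrig measured grep falling behind perl:
grep -l "^Customer Code.*$code\$" "$tmpdir"/*.dat

rm -rf "$tmpdir"
```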