Update { should have benchmarked properly, apparently ... rest of this post largely invalidated by runrig's reply. }
Abigail-II's suggestion of grep -l does not fit your specs, as it will still scan every file in full, but your Perl can be simplified:

    perl -e '
        $site = shift;
        for $f (@ARGV) {
            local @ARGV = $f;
            /^Customer Code/ && last while <>;
            / \Q$site\E$/ && (print("$f\n"), last);
        }
    ' $code *
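One caveat with the magic <> here: when last breaks out mid-file, the ARGV handle stays open, so the next <> resumes the half-read file before opening the new one. A defensive sketch (same invocation, just closing the handle on each pass):

    perl -e '
        $site = shift;
        for $f (@ARGV) {
            close ARGV;              # discard any half-read file from the previous pass
            local @ARGV = $f;        # make <> read just this one file
            /^Customer Code/ && last while <>;       # stop at the Customer Code line
            / \Q$site\E$/ && (print("$f\n"), last);  # line ends in " $site"? report file, stop
        }
    ' $code *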
But why all that? A short-circuiting sed script to find a customer's code:

    sed 's/^Customer Code[^ ]* //; t done; d; : done; q;' FILE
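If the sed flow control looks cryptic, here is the same script spelled out with full-line comments (both GNU and POSIX sed accept comment lines):

    sed '
        # strip the "Customer Code..." label, keeping the rest of the line
        s/^Customer Code[^ ]* //
        # the s/// matched: branch to the "done" label
        t done
        # no match: delete the pattern space and start the next cycle
        d
        : done
        # print the extracted code and quit without reading the rest of the file
        q
    ' FILE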
Wrap some sh around it and it does the job (wrote this in bash, not sure if it works 1:1 in Korn, but it should be easy to port anyway):

    for ALIAS in * ; do
        [ "`sed 's/^Customer Code[^ ]* //; t done; d; : done; q' "$ALIAS"`" = "$code" ] && break
    done
    echo "$ALIAS"
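Note that if no file matches, the loop falls off the end and $ALIAS is left holding the last filename. A variant sketch with an explicit no-match guard (FOUND is a name made up here):

    FOUND=
    for ALIAS in * ; do
        if [ "`sed 's/^Customer Code[^ ]* //; t done; d; : done; q' "$ALIAS"`" = "$code" ]; then
            FOUND=$ALIAS
            break
        fi
    done
    echo "${FOUND:-no match}"

This should work the same in bash and ksh.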
Update: Btw: when you're passing {print $1} to awk, it's a sign you really wanted to use cut - in your case, that would be cut -d: -f1
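For example, on colon-delimited data like /etc/passwd, these two print the same first field:

    awk -F: '{print $1}' /etc/passwd
    cut -d: -f1 /etc/passwd

cut is both shorter and typically cheaper, since it does no pattern matching.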
Makeshifts last the longest.
In reply to Re^2: Code Efficiency: Trial and Error? by Aristotle
in thread Code Efficiency: Trial and Error? by Tanalis