in reply to Code Efficiency: Trial and Error?
When optimizing, you have to ask yourself "How much time will this really save?", and weigh that against "How maintainable is each method?".
And then there's "How much time do I have to play with this?" :-)
I was recently trying to optimize some homegrown Korn shell menu scripts of ours, where at one point the user had to wait about 10 seconds for a prompt (I did end up getting it down to 2-3 seconds). One function was looking up the alias (filename) for a certain customer code (the last field on a certain line). The function was doing this:

    grep " $code$" * | awk -F: '{print $1}'

So it was grep'ing through all the files in a directory even though it could quit as soon as it hit the first matching line. So I rewrote it as this (also realizing that the target line always started with 'Customer Code'):

    awk '/^Customer Code.* '$code'$/{print FILENAME; exit}' *

But when you hit the 'Customer Code' line in one file and it's not the one you want, you'd like to close that file right there and move on to the next one, especially because the 'Customer Code' line was always the second or third line of a 40-100 line file. gawk has a 'nextfile' statement which does exactly this, but I'm stuck with plain awk for now. So let's try Perl:

    perl -e '
      $site = shift;
      opendir DIR, ".";
      @ARGV = readdir DIR;
      closedir DIR;
      while (<>) {
          if (/^Customer Code.* (\w+)$/) {
              print("$ARGV\n"), exit if $site eq $1;
              close ARGV;    # wrong file: close it, move on to the next
          }
      }' $code

This, on average, goes twice as fast as the original, but at the cost of readability (especially since no one else here knows Perl all that well). And then it turned out that this function was not even called during that particularly slow prompt, and was only being called once per execution (in another place), so I'd be saving a whole 0.03 seconds (which the user wouldn't even notice) by doing it in Perl. But I'm leaving the Perl in for now, with the old line commented out and a comment to the effect of "it's a lot more fun doing it this way" :-)
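Without gawk's nextfile, the "close the file early" idea can still be approximated portably: run plain awk once per file and exit at its first 'Customer Code' line, with a wrapper loop moving on to the next file. A sketch with made-up sample files (names, contents, and the customer code are all hypothetical) — keeping in mind that, as the Update notes, one process per file is itself a real cost:

```shell
#!/bin/sh
# Sketch with hypothetical data: approximate gawk's nextfile by letting
# awk exit at the first 'Customer Code' line of each file, match or not.
dir=$(mktemp -d)
printf 'alpha\nCustomer Code: widgets AAA\nmany more lines...\n' > "$dir/alpha"
printf 'beta\nCustomer Code: gadgets BBB\nmany more lines...\n'  > "$dir/beta"
code=BBB

found=
for f in "$dir"/*; do
    # awk stops reading this file at its first 'Customer Code' line,
    # so the rest of a 40-100 line file is never read
    hit=$(awk -v code="$code" '
        /^Customer Code/ { if ($NF == code) print FILENAME; exit }
    ' "$f")
    if [ -n "$hit" ]; then found=$hit; break; fi
done
echo "${found##*/}"    # prints the name of the matching file
```

The trade-off is the same one the Update runs into: the early exit saves reading, but the per-file fork/exec of awk costs more than it saves once there are many files.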
Update: As a (hopefully) final note, even though the code above wasn't what was slowing down the especially slow prompt, I did finally get that prompt down to almost instantaneous by replacing the offending section with Perl. The problem was that there were about 90 'customer' files, and the script was fork/exec'ing grep/awk/etc. for each file. So I just read each file once, saved what I needed in a hash, and printed it all out at the end:

    site_list=$( perl -e '
      while (<>) {
          # Collect the known system names from /etc/uucp/Systems
          if ($ARGV eq "/etc/uucp/Systems") {
              $system{$1} = undef if /^(\w+)/;
              next;
          }
          # Skip any file not named after a known system
          close ARGV, next unless exists $system{$ARGV};
          # Save this system's customer code, then move to the next file
          $system{$ARGV} = $1, close ARGV
              if /^Customer Code.*\s(\w+)\s*$/;
      }
      $, = " ";
      print sort(values %system), "\n";
    ' /etc/uucp/Systems *)

So some things are worth optimizing. It saves only about 3 seconds (10 from the original) in actual time, but the annoyance it saves is priceless :-)
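The one-process, read-everything-once pattern isn't Perl-specific. Here is a minimal awk sketch of the same hash-then-print-at-the-end idea, with entirely made-up file names and contents standing in for the uucp Systems file and the customer files:

```shell
#!/bin/sh
# Sketch with made-up data: a single process reads a list of known
# systems plus every customer file, keeps what it needs in an array,
# and prints once at the end -- no fork/exec per file.
dir=$(mktemp -d)
printf 'alpha Any ACU 9600\nbeta Any ACU 9600\n' > "$dir/Systems"
printf 'Customer Code: widgets AAA\n' > "$dir/alpha"
printf 'Customer Code: gadgets BBB\n' > "$dir/beta"
printf 'Customer Code: stale CCC\n'   > "$dir/gamma"   # not in Systems
cd "$dir" || exit 1

site_list=$(awk '
    FILENAME == "Systems" { systems[$1] = 1; next }   # learn known systems
    (FILENAME in systems) && /^Customer Code/ { print $NF }
' Systems alpha beta gamma | sort | xargs)
echo "$site_list"    # gamma is not a known system, so CCC is excluded
```

The customer files are listed explicitly here so that Systems isn't picked up a second time by a glob; the point is simply that 90 files cost one process instead of 90.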