in reply to Re: Matching First Character of Strings Efficiently
in thread Matching First Character of Strings Efficiently

Including 'expensive_function' in a benchmark means that you are mostly measuring how often that function gets called (which isn't what you asked for) and reduces the differences among the remaining items to the level of noise (which makes it a worse benchmark).

I commented out the guts of expensive_function, shortened the names, and added tye2, which does what I thought was the obvious optimization of pulling the first char of $str_a out at the top:

    'tye2' => sub {
        my $f = ord( $str_a );
        for my $str_b ( @list ) {
            next if $f != ord( $str_b );
            expensive_function( $str_a, $str_b );
        }
    },
and got these results:
             Rate   hb1   hb2  L~R3   BUk  L~R2  dlrm   L~R  tye1 kvale  tye2
    hb1    46.5/s    --  -35%  -35%  -38%  -60%  -62%  -70%  -73%  -73%  -74%
    hb2    71.4/s   53%    --   -0%   -6%  -38%  -42%  -54%  -59%  -59%  -61%
    L~R3   71.4/s   53%    0%    --   -6%  -38%  -42%  -54%  -59%  -59%  -61%
    BUk    75.7/s   63%    6%    6%    --  -35%  -39%  -51%  -56%  -56%  -58%
    L~R2    116/s  149%   62%   62%   53%    --   -6%  -25%  -33%  -33%  -36%
    dlrm    124/s  166%   73%   73%   63%    7%    --  -20%  -28%  -29%  -32%
    L~R     155/s  233%  117%  117%  105%   34%   25%    --  -10%  -11%  -14%
    tye1    172/s  270%  141%  141%  128%   48%   39%   11%    --   -1%   -5%
    kvale   174/s  274%  143%  143%  130%   50%   41%   12%    1%    --   -4%
    tye2    181/s  289%  154%  154%  139%   56%   46%   17%    5%    4%    --
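
(For reference, the scaffolding behind a table like this is just Benchmark::cmpthese with the guts of expensive_function stubbed out. A minimal sketch -- the @list contents, the run length, and the exact shape of tye1 are my assumptions; tye2 is the sub shown above:)

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    sub expensive_function { }    # guts commented out: time only the comparison

    my $str_a = 'apple';
    my @list  = map { chr( 97 + int rand 26 ) . 'tail' } 1 .. 1000;

    cmpthese( -3, {    # run each candidate for about 3 CPU seconds
        'tye1' => sub {
            for my $str_b (@list) {
                next if ord($str_a) != ord($str_b);
                expensive_function( $str_a, $str_b );
            }
        },
        'tye2' => sub {
            my $f = ord($str_a);    # hoist the invariant ord() out of the loop
            for my $str_b (@list) {
                next if $f != ord($str_b);
                expensive_function( $str_a, $str_b );
            }
        },
    } );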

But I still consider such nano-optimization games to be more a waste than of value. (:

- tye        

Re: Re^2: Matching First Character of Strings Efficiently (benchmarking)
by Limbic~Region (Chancellor) on Mar 15, 2004 at 22:58 UTC
    tye,
    Calculating the ordinal value of $str_a only once was an obvious optimization. I was hurrying to get out of the office - that will teach me.

    But I still consider such nano-optimization games to be more a waste than of value.

    I completely disagree with you. You don't seem to condemn golfing, which I feel is more of a waste (probably because I am no good at it). It is not like I was complaining that my solution wasn't fast enough. I feel that such games are of value for a few reasons:
    • It can be fun
    • It can help disprove myths
    • Figuring out the most efficient way once allows it to be reused in the future effortlessly.
    Now I am not saying one should sacrifice readability to save a few cycles, or that a person should spend more programmer time than they will save in run time. As a periodic academic exercise, though, I see a lot of value in it. Of course, YMMV since you know a lot more than I do.

    Cheers - L~R

      Perl Golfing is not an unmixed blessing. But most people are smart enough to realize that golfed solutions aren't usually the best choice when writing code that you care about.

      Unfortunately, many people seem to think that nano-optimized code is the best choice when writing code that you care about. You reinforce this yourself above.

      So we get people spending way too much time thinking and arguing about whether they should use substr or ord, then spreading the joy by nano-optimizing other things, and in the end they spend an extra 4 hours on their code so that it runs 0.002 seconds faster (out of a total run time of 2.4 seconds).
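
      (For concreteness, the whole argument is over spellings as small as these -- a sketch of the contested first-character test, as it would sit inside the candidate loop:)

          # One spelling of "skip unless the first characters match":
          next if ord($str_a) != ord($str_b);
          # ...or, equivalently:
          next if substr( $str_a, 0, 1 ) ne substr( $str_b, 0, 1 );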

      It'd be much better if people chose the code that was clear or easy to maintain.

      Just look at how much time has been wasted by people trying to replace s/x//g with tr.
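
      (To make that example concrete -- both lines below strip every 'x' from a string; s///g is the spelling everyone reads easily, tr///d the usually-faster one. A sketch, not code from the thread:)

          my $str = 'xylophone box';
          (my $via_s  = $str) =~ s/x//g;     # substitution: clear and idiomatic
          (my $via_tr = $str) =~ tr/x//d;    # tr///d deletes every 'x'; usually faster
          # both copies are now 'ylophone bo'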

      The cult of nano-optimization is a seductive one. This is why several people have a very strong aversion to micro-/nano-optimization.

      - tye        

Re: Re^2: Matching First Character of Strings Efficiently (benchmarking)
by graff (Chancellor) on Mar 16, 2004 at 01:11 UTC
    Rate ... hb1 46.5/s ... tye2 181/s ...
    But I still consider such nano-optimization games to be more a waste than of value.

    Since when is a 4-to-1 speed-up "nano"? Granted, in the OP's case, maybe this particular 4:1 ratio would end up affecting only 10% of the overall run time, once "expensive_process" (whatever it is) plays its role. Also, I wouldn't really want to bust my butt running an exhaustive test of 12 alternatives -- I'd just compare the first (or current) idea against the next one that looks like it could do better, if the current idea seems to be taking longer than I like.

    But given that this sort of vetting task is likely to come up, if I had to do it over an input list of, say, 2 million items, I'd much prefer that it take 3 hours rather than 12.

    update: Having seen the replies to my post from tye and Aristotle, I have to say (sheepishly) thanks for the wake-up -- I should have known better than to take this sort of benchmark tabulation at "face value". In fact, I have used Benchmark myself on other problems -- but I opted for the "timethese" method (not "cmpthese"), which offers the advantage of showing the overall wall-clock difference as well as the "relative proportional" time difference for each approach. In one attempt, a comparison of two approaches showed "5.44/s" for one and "94.34/s" for the other -- like wow, huh? -- but the wall-clock difference was 2 sec vs. 9 sec... go figure. If an app really needs to be optimized, the first tool of choice should be profiling, of course, to see what segment of the code is eating all the time, so you can rethink that part first.
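
    (A minimal sketch of the two Benchmark interfaces mentioned -- the candidate subs here are placeholders, not code from this thread. timethese prints absolute wall-clock/CPU times per candidate; cmpthese adds the relative-percentage matrix:)

        use Benchmark qw(timethese cmpthese);

        my $results = timethese( 100_000, {    # absolute times per candidate
            'substr' => sub { substr( 'apple', 0, 1 ) eq 'a' },
            'ord'    => sub { ord('apple') == ord('a') },
        } );
        cmpthese($results);    # same measurements, shown as a percentage matrix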

      Since when is a 4-to-1 speed-up "nano"?

      It is called micro- or nano-optimization because the operation being optimized is so small that it is likely to account for closer to 0.1% (or less) of the total run time of most non-trivial scripts, not the 10% you guessed -- and not because Benchmark.pm measures a small percentage difference.

      Even if it accounted for 10% of the run time (extremely unlikely for an operation as small as comparing two characters), a 4-to-1 speed-up of that operation means only a 7.5% total speed-up. In reality, you are more likely to get something like a 0.075% speed-up.

      I'd much prefer that it take 3 hours rather than 12

      See, this is why this cult is so seductive. No matter how many times someone explains why this type of stuff is almost never very useful, people see "Four times faster!" and start drooling. Using your own (highly optimistic) numbers, a 12-hour run would take 11.1 hours (not 3). I mention this not because I think 0.9 hours is a short time (because the reality will probably be less than even a few minutes saved for a 12-hour run) but to show how "Four times faster!" takes over one's thoughts and one jumps to invalid conclusions.
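
      (The arithmetic, as a one-off sanity check -- the 10% fraction and the 4x factor are the numbers from this exchange, not measurements:)

          my $fraction = 0.10;    # share of total run time spent in the optimized spot
          my $speedup  = 4;       # "Four times faster!"
          my $new_time = ( 1 - $fraction ) + $fraction / $speedup;    # Amdahl's law
          printf "total run time: %.1f%% of the original\n", 100 * $new_time;    # 92.5%
          printf "a 12-hour run becomes %.1f hours\n",       12 * $new_time;     # 11.1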

      Even if you had a script that did (well, tried to do) nothing but compare lots of leading characters of strings, you'd probably find that the overhead would add up to a lot more than the comparison time. Benchmark.pm does a lot of work to estimate that overhead and subtract it.

      I also think no one would try the slowest methods benchmarked here except in a rather desperate attempt to nano-optimize, so even the 4-to-1 speed-up isn't real. It is closer to a 4-to-1 slow-down caused by a desperate optimization attempt.

      - tye        

      Very well, now look at the other results. The solution both I and kvale proposed is the most straightforward, arguably most maintainable, and least "creative" one. It comes in third, much faster than the slowest, very "clever" solution, and achieves over 80% of the speed of tye's winning entry. Do you see anything wrong with your picture yet?

      The reality is that "boring" solutions chosen with a clear understanding of the task at hand are usually plenty fast enough. In my experience, they tend to achieve close to 85% of the possible performance.

      Given the points already made by tye about the real impact of micro-optimizations, there is very little worth in evaluating the obscure. Go with the maintainable, clear, concise solution, and worry only if performance is actually insufficient.

      And even if you have gotten there, it is still more worthwhile to optimize your expensive_functions instead.
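
      (To make "boring" concrete: the straightforward version presumably looked something like the sketch below -- the exact kvale/L~R code isn't quoted in this thread, so treat this as an assumption:)

          for my $str_b (@list) {
              next unless substr( $str_a, 0, 1 ) eq substr( $str_b, 0, 1 );
              expensive_function( $str_a, $str_b );
          }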

      Makeshifts last the longest.