in reply to Algorithm Efficiency vs. Specialized Hardware?
Benchmark: timing 1000000 iterations of Grep, Max, Ternary...
Grep: 4 wallclock secs ( 3.02 usr + 0.00 sys = 3.02 CPU) @ 331125.83/s (n=1000000)
Max: 7 wallclock secs ( 5.79 usr + 0.00 sys = 5.79 CPU) @ 172711.57/s (n=1000000)
Ternary: 5 wallclock secs ( 5.72 usr + 0.00 sys = 5.72 CPU) @ 174825.17/s (n=1000000)
This is very different from what I had before. To shed some light on it, let me first point out that the code above is almost, but not quite, the code that produced my previous results.
The only difference is that my code put each snippet to be benchmarked inside a sub, and then gave Benchmark the function calls. I did this because I thought it more accurately reflected real-world max() usage, though that is certainly not true for all programs. Also different: I changed my to our, so as not to change the function calls or add use vars. Would this make any speed difference? I know our is new in 5.6.0, and I am already loving it, but I don't know its internals.
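To make that concrete, here is a minimal sketch of both points, the sub wrapper and the strict-safe global; the names and the constant 9 are made up for illustration, this is not the actual benchmarked code:

#!/usr/bin/perl -w
use strict;
use Benchmark;

# `our` (new in 5.6.0) declares the package global $main::now in a way
# that satisfies `use strict`, without needing `use vars qw($now)`.
our $now = 8;

# The sub wrapper: Benchmark times a call to this, so every iteration
# also pays the cost of a Perl sub call.
sub Max { $now = 9 if 9 > $now }

timethese(1_000_000, {
    # same body timed directly; re-declaring `our $now` keeps the
    # eval'd snippet strict-safe
    Inline  => q{ our $now; $now = 9 if 9 > $now; },
    Wrapped => q{ Max(); },
});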
Now I'll take Russ's code given above and make just that one change. The function calls are SO expensive compared to the rest of the code that I will turn the iterations down to 10_000, and we will still see it take much, much longer. Here is the code:
#!/usr/bin/perl -w
use strict;

our $now = 8;
# build a hash with keys 1..1000, each mapped to 1
our %url = ( monday => { @{[ map( ($_, 1), (1..1000) ) ]} } );

use Benchmark;
timethese(10000, {
    Grep    => q{ Grep(); },
    Ternary => q{ Ternary(); },
    Max     => q{ Max(); },
});

sub Grep {
    $now = (sort grep { $_ <= $now } keys %{ $url{'monday'} })[-1];
}

sub Ternary {
    $now = ($now < $_ && $_ < 8 ? $_ : $now) for keys %{ $url{'monday'} };
}

sub Max {
    foreach ( keys %{ $url{'monday'} } ) { $now = $_ if $_ > $now }
}
And the results:
Benchmark: timing 10000 iterations of Grep, Max, Ternary...
Grep: 72 wallclock secs (68.98 usr + 0.00 sys = 68.98 CPU) @ 144.97/s (n=10000)
Max: 64 wallclock secs (60.01 usr + 0.00 sys = 60.01 CPU) @ 166.64/s (n=10000)
Ternary: 66 wallclock secs (63.24 usr + 0.01 sys = 63.25 CPU) @ 158.10/s (n=10000)
And so this is more consistent with my earlier results.
Platform is an AMD K6-II/400 with 128 MB of SDRAM, running Linux 2.2.12 and Perl 5.6.0.
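(As an aside, Benchmark will also accept code references instead of strings, as in the sketch below, which skips the eval'd-string layer entirely; I have not measured whether that changes these numbers.)

use Benchmark;
# Passing coderefs instead of q{} strings: Benchmark invokes them
# directly, with no string-eval wrapper around each snippet.
timethese(10000, {
    Grep    => \&Grep,
    Ternary => \&Ternary,
    Max     => \&Max,
});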
I really don't understand these results. I know that function calls are very, very expensive (the reason OO is often slow; accessor methods vs. direct lookups), but I don't understand why the proportions are different. Why doesn't the function call add the same amount of processor time to each snippet? Or is there something else going on?
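One way to isolate the per-call cost would be to time an empty sub against a snippet that does trivially cheap work; this is only a sketch with made-up names, not code from the thread:

#!/usr/bin/perl -w
use strict;
use Benchmark;

sub empty { }    # no body, so nearly all of its cost is the call itself

timethese(1_000_000, {
    Call   => q{ empty(); },      # pays sub-call overhead each iteration
    NoCall => q{ my $x = 1; },    # baseline with no call at all
});

# The difference between the two rates approximates the fixed cost of one
# sub call; if the Grep/Max/Ternary gaps don't shift by roughly that same
# amount, something beyond flat call overhead is going on.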
Paris Sinclair    |    4a75737420416e6f74686572
pariss@efn.org    |    205065726c204861636b6572
I wear my Geek Code on my finger.
Replies are listed 'Best First'.
  RE:(2) Algorithm Efficiency (function overhead) by Russ (Deacon) on Jun 21, 2000 at 00:35 UTC
  by btrott (Parson) on Jun 21, 2000 at 00:53 UTC