in reply to Re: Re: What is the fastest pure-Perl implementation of XXX?
in thread What is the fastest pure-Perl implementation of XXX?

I agree that a divide-and-conquer algorithm will eventually win for large numbers, but there is a big constant multiplier to overcome at moderate sizes:
    use Benchmark qw(:all);

    sub recurse {                 # divide and conquer
        unshift @_, 1 if 2 != @_;
        my ($m, $n) = @_;
        if ($m < $n) {
            my $k = int($m/2 + $n/2);
            return recurse($m, $k) * recurse($k+1, $n);
        }
        else {
            return $m;
        }
    }

    sub iterate {
        my $result = 1;
        $result *= $_ for 2 .. shift;
        return $result;
    }

    cmpthese(10_000, {
        'recurse' => sub { recurse(160) },
        'iterate' => sub { iterate(160) },
    });
yields
    Benchmark: timing 10000 iterations of iterate, recurse...
       iterate:  2 wallclock secs ( 1.53 usr +  0.00 sys =  1.53 CPU) @ 6535.95/s (n=10000)
       recurse: 23 wallclock secs (22.82 usr +  0.00 sys = 22.82 CPU) @  438.21/s (n=10000)
              Rate recurse iterate
    recurse  438/s      --    -93%
    iterate 6536/s   1392%      --
If one uses Math::BigInt for larger factorials, then the results even out a bit because of all the function calls needed to do the multiplications. But when one can stick to the built-in operators and minimize function calls, that is usually a huge win in perl5.
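
For a rough feel of the difference, here is a small illustrative sketch (the choice of 25! and the variable names are mine, not from the benchmark above). On a 64-bit perl the plain-scalar product silently falls back to floating point once it outgrows the native integer range, so every step stays a single cheap opcode, while the Math::BigInt version pays a method call per step but stays exact:

    use Math::BigInt;

    # Plain scalars: fast, but only approximate once the value no longer
    # fits in a native integer (it silently becomes a float).
    my $float = 1;
    $float *= $_ for 2 .. 25;
    print "$float\n";                 # prints something like 1.5511210043331e+25

    # Math::BigInt: exact, but every multiplication is a method call on an object.
    my $big = Math::BigInt->new(1);
    $big->bmul($_) for 2 .. 25;
    print "$big\n";                   # 15511210043330985984000000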

-Mark

Re: Re: Re: Re: What is the fastest pure-Perl implementation of XXX?
by tilly (Archbishop) on Mar 31, 2004 at 18:52 UTC
    You're doing the computation in a way that entirely misses my point.

    By using the built-in operators, you are using floating point, which means that you are only keeping track of a fixed number of digits. Therefore no matter how large the factorial gets (until you overflow floating point), the iterative algorithm never slows down. Divide and conquer will never win because the problem that it is trying to alleviate - that multiplying large numbers takes a lot of operations - never arises. If you're willing to accept an approximation, then you can just use Stirling's formula and go straight to an answer.
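
    A minimal sketch of that shortcut, assuming the usual two-term form of Stirling's formula, n! ≈ sqrt(2*pi*n) * (n/e)**n (the function name is mine); the leading correction term is about 1/(12n), so for n = 160 the relative error is already only around 0.05%:

        # Stirling's approximation: constant time, no loop at all,
        # but only accurate to the first few significant digits.
        sub stirling {
            my $n  = shift;
            my $pi = 4 * atan2(1, 1);
            return sqrt(2 * $pi * $n) * ($n / exp(1))**$n;
        }

        my $float = 1;
        $float *= $_ for 2 .. 160;    # floating-point loop, for comparison
        printf "Stirling: %.6e   iterative: %.6e\n", stirling(160), $float;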

    But if you ask for precise calculations, whether in a pure Perl implementation or in some lower-level library, my algorithmic comment holds. (Any slowness of pure-Perl calls comes on top of that.) Try calculating 1000! or 10_000!. You don't even need Benchmark - the time for a single calculation becomes painfully visible.

    I assumed, of course, that we were looking for precise calculations...

      And you are still missing my point: asymptotic analysis is all very well and good, but if you want to optimize your code for speed, you have to pay attention to the constant multipliers.

      Suppose I use exact arithmetic:
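
      A minimal sketch of such a comparison, assuming Math::BigInt supplies the exact arithmetic and reusing the operand 160 from the earlier benchmark (the iteration count of 500 is only illustrative; actual numbers will vary by machine and Math::BigInt backend):

          use Benchmark qw(:all);
          use Math::BigInt;

          sub recurse_big {    # divide and conquer, exact
              unshift @_, 1 if 2 != @_;
              my ($m, $n) = @_;
              return Math::BigInt->new($m) if $m >= $n;
              my $k = int($m/2 + $n/2);
              return recurse_big($m, $k) * recurse_big($k+1, $n);
          }

          sub iterate_big {    # straight loop, exact
              my $result = Math::BigInt->new(1);
              $result->bmul($_) for 2 .. shift;
              return $result;
          }

          cmpthese(500, {
              'recurse_big' => sub { recurse_big(160) },
              'iterate_big' => sub { iterate_big(160) },
          });

      Returning a Math::BigInt object from the base case keeps every intermediate product exact; everything else mirrors the plain-scalar versions above.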

      There is an important lesson here: The algorithm with the best asymptotic behavior isn't always faster. If speed is important, decide the parameter domain in which the algorithms are to be used, and benchmark the possibilities.

      -Mark