in reply to Re^4: Factorial algorithm execution time
in thread Factorial algorithm execution time

It won't work as well, but it works well enough. My benchmark for 5000 says:
Method 6: 14 wallclock secs (14.31 usr + 0.01 sys = 14.32 CPU)
Method 7: 15 wallclock secs (14.28 usr + 0.01 sys = 14.29 CPU)
Method 8: 14 wallclock secs (14.25 usr + 0.01 sys = 14.26 CPU)
I got rid of the Math::BigInt call on the list of 1 then ran it a bunch of times, and there seemed to be some fluctuation. Another run gave me:
Method 6: 14 wallclock secs (14.08 usr + 0.02 sys = 14.10 CPU)
Method 7: 14 wallclock secs (14.14 usr + 0.01 sys = 14.15 CPU)
Method 8: 15 wallclock secs (14.12 usr + 0.01 sys = 14.13 CPU)
Not bad for such a stupid approach, huh?

In truth, yours is better by roughly the margin you lose from using the overloading interface. If I use the overloading interface as well:

sub fact6b {
    my ($m, $n) = @_;
    if ($m < $n) {
        my $k = int($m/2 + $n/2);
        return fact6b($m, $k) * fact6b($k+1, $n);
    }
    else {
        return Math::BigInt->new($m);
    }
}
is very straightforward to anyone who has seen divide-and-conquer before. And recursion is only slow because Perl penalizes function calls so heavily.
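To make the recursion concrete, here is a self-contained sketch of the same divide-and-conquer product (the name range_product is mine, for illustration): n! is computed as the product of 1..n, splitting the range down the middle so that only the topmost few multiplications involve really huge numbers.

```perl
use strict;
use warnings;
use Math::BigInt;

# Divide-and-conquer product of the integers m..n.
# Splitting the range in half keeps the operands of most
# multiplications small; only the top-level ones are huge.
sub range_product {
    my ($m, $n) = @_;
    return Math::BigInt->new($m) if $m >= $n;
    my $k = int(($m + $n) / 2);
    return range_product($m, $k) * range_product($k + 1, $n);
}

print range_product(1, 20), "\n";   # 20! = 2432902008176640000
```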

Of course at this point we are both slow mainly because Perl doesn't implement a good algorithm for big integer multiplication.

Re^6: Factorial algorithm execution time
by Aristotle (Chancellor) on Oct 19, 2002 at 11:24 UTC

    I wasn't calling it a stupid approach. :) It just seems needlessly complicated. Keeping the numbers small to accelerate this algorithm hadn't occurred to me before you mentioned it.

    I used the overloading interface because calling bmul made Perl complain "Can't call method "bmul" without a package or object reference", and I was too lazy to figure it out at the time. Thanks for nagging; I took another look at the docs. Fixing it didn't really change anything, though:

    #!/usr/bin/perl -w
    use strict;
    use Benchmark;
    use Math::BigInt;

    sub fact7b {
        my @factor = map Math::BigInt->new($_), (2 .. shift);
        while (@factor > 1) {
            my @pair = splice @factor, 0, @factor / 2;
            $_ = $_ * pop @pair for @factor[0 .. $#pair];
        }
        return shift @factor;
    }

    sub fact7c {
        my @factor = map Math::BigInt->new($_), (2 .. shift);
        while (@factor > 1) {
            my @pair = splice @factor, 0, @factor / 2;
            $_ = Math::BigInt->new($_->bmul(pop @pair)) for @factor[0 .. $#pair];
        }
        return shift @factor;
    }

    timethese 100, {
        f7b => sub { fact7b 1500 },
        f7c => sub { fact7c 1500 },
    };

    __END__
    Benchmark: timing 100 iterations of f7b, f7c...
           f7b: 388 wallclock secs (381.29 usr + 1.90 sys = 383.19 CPU) @ 0.26/s (n=100)
           f7c: 387 wallclock secs (380.56 usr + 1.89 sys = 382.45 CPU) @ 0.26/s (n=100)

    Makeshifts last the longest.

      There is a measurable, if minor, improvement, which is about what your algorithm offers over mine.

      The reason the gain is so small is that the connection between operand size and speed is not that strong. You only save when you manage to reduce the number of "digits" involved, and at least in Perl 5.6's implementation of Math::BigInt, each "digit" is a chunk of size 1e5. Ensuring that only a few multiplications involve huge numbers captures most of the available savings. Multiplying big numbers is the bottleneck, and you cannot avoid it much further; the next step is to make the multiplication itself faster.

      If you want to try a pure Perl solution, I suggest implementing Karatsuba multiplication. It isn't nearly as fast as Fourier transforms, but it is easier to understand and implement.
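      To illustrate the recurrence, here is a structural sketch (the name karatsuba and the base-case threshold are mine; it leans on Math::BigInt for the splits and additions purely to keep the sketch short, whereas a real pure-Perl replacement would manage its own digit arrays). Karatsuba writes x = a·B + b and y = c·B + d with B = 10^m, and computes x·y with three recursive products instead of four, since the middle term is (a+b)(c+d) - ac - bd.

```perl
use strict;
use warnings;
use Math::BigInt;

# Karatsuba recurrence for non-negative integers (sketch, not tuned):
#   x*y = ac*B^2 + ((a+b)(c+d) - ac - bd)*B + bd,  B = 10^m
# Three recursive multiplications replace the schoolbook four.
sub karatsuba {
    my ($x, $y) = @_;
    $_ = Math::BigInt->new($_) for $x, $y;
    my $n = length($x->bstr) > length($y->bstr)
          ? length($x->bstr) : length($y->bstr);
    return $x * $y if $n <= 4;            # small enough: one direct multiply
    my $m = int($n / 2);
    my $B = Math::BigInt->new(10)->bpow($m);
    my ($a, $b) = $x->copy->bdiv($B);     # bdiv in list context: (quotient, remainder)
    my ($c, $d) = $y->copy->bdiv($B);
    my $ac  = karatsuba($a, $c);
    my $bd  = karatsuba($b, $d);
    my $mid = karatsuba($a + $b, $c + $d) - $ac - $bd;
    return $ac * $B * $B + $mid * $B + $bd;
}

print karatsuba(123456789, 987654321), "\n";   # 121932631112635269
```

      Cutting the four half-size products down to three is what drops the cost from O(n^2) to O(n^1.585), which is why it pays off even with the overhead of the extra additions.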