Re: Algorithm for cancelling common factors between two lists of multiplicands

by QM (Parson)
on Aug 11, 2005 at 23:33 UTC ( [id://483148] )


in reply to Algorithm for cancelling common factors between two lists of multiplicands

I have toyed with this some more, and I have a pure-Perl solution that runs your example in about 7 minutes on my PC; all but 5 seconds of that is the factoring by Math::Big::Factors. I have an idea about prefactoring up to some limit, which I'll report on later.

You might note that 2**32 is not a practical limit unless most of the terms cancel out. For instance, if you had to compute (2**32)!/(2**31)!, the quotient would still leave 2**31 terms (2**31 + 1 through 2**32) in the numerator.

Can you comment on the practical limits of your application?

For your example, which is essentially:

 10389!  4570!  44289!  11800!
------------------------------
56089! 989! 9400! 43300! 2400!
and comparing results:

8.070604647867604097576877675243109941805e-7030
8.070604647867604097576877675243109941729476950993498765160880e-7030
(Note that MBF actually moved the decimal and gave the exponent as "-7069"; I've normalized it to match the other result.)
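In case the normalization isn't obvious: bsstr() reports an integer mantissa, so shifting its decimal point left by one less than the number of digits raises the exponent by the same amount. A quick sketch of the arithmetic, with variable names of my own choosing:

use strict;
use warnings;

# MBF's bsstr() form here is a 40-digit integer mantissa times 10**-7069.
# Moving the decimal point 39 places left adds 39 to the exponent:
# -7069 + 39 = -7030, which matches the other result.
my $digits   = 40;       # digits in the integer mantissa
my $bsstr_e  = -7069;    # exponent as reported by bsstr()
my $normal_e = $bsstr_e + ( $digits - 1 );
print "normalized exponent: $normal_e\n";    # prints -7030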

Here's my code:

#!/your/perl/here
use strict;
use warnings;
use Benchmark;
use Math::BigFloat;
use Math::Big::Factors;

my $ORDER = 5;    # wheel order passed to factors_wheel()
# preserve this many significant digits
Math::BigFloat->accuracy(40);
my $ONE = Math::BigFloat->new(1);

my $t0 = new Benchmark;

my @num = ( 10389, 45700, 44289, 11800 );
my @den = ( 56089, 989, 9400, 43300, 2400 );

my $t1 = new Benchmark;
time_delta("startup", $t1, $t0);

# expand terms
my (@num_fac, @den_fac);
foreach my $n ( @num ) {
    push @num_fac, 2..$n;
}
foreach my $n ( @den ) {
    push @den_fac, 2..$n;
}

my $t2 = new Benchmark;
time_delta("expand", $t2, $t1);

my $quotient = cancel_quotient( \@num_fac, \@den_fac );

my $t3 = new Benchmark;
time_delta("cancel_quotient", $t3, $t2);

warn "factoring ", scalar keys %{$quotient}, " terms\n";

my $factors = prime_factor_hash( $quotient );

my $t4 = new Benchmark;
time_delta("prime_factor_hash", $t4, $t3);

# evaluate terms
my $q = $ONE->copy();
#print "Factors\n";
foreach my $t ( sort { $factors->{$b} <=> $factors->{$a} } keys %{$factors} ) {
    my $bft = $ONE->copy()->bmul($t)->bpow($factors->{$t});
    $q->bmul($bft);
#    print "$t**$factors->{$t}: $q\n";
}
print $q->bsstr(), "\n";

my $t5 = new Benchmark;
time_delta("multiply", $t5, $t4 );
time_delta("All", $t5, $t0 );

exit;

########################################
# cancel like terms between 2 arrays
# similar to unique array element
sub cancel_quotient {
    my $num = shift;
    my $den = shift;
    my %quotient;

    # determine the residual terms after cancelling:
    #   positive counts are in the numerator
    #   negative counts are in the denominator
    #   zero counts are filtered out
    foreach my $n ( @{$num} ) {
        $quotient{$n}++;
    }
    foreach my $n ( @{$den} ) {
        $quotient{$n}--;
    }
    foreach my $n ( keys %quotient ) {
        delete $quotient{$n} unless $quotient{$n};
    }
    return \%quotient;
}

#######################################
# factor keys of hash, and duplicate "value" times
sub prime_factor_hash {
    my $quotient = shift;
    my %factors;

    foreach my $n ( keys %{$quotient} ) {
        my @factors = Math::Big::Factors::factors_wheel($n, $ORDER);
        foreach my $f ( @factors ) {
            $factors{$f} += $quotient->{$n};
        }
    }
    return \%factors;
}

###########################################
# nice benchmark timestamp printer
sub time_delta {
    print +shift, ": ", timestr(timediff(@_)), "\n";
}
Update: I ran some variations; here are the time comparisons:

Order = 5, Accuracy = 40, time = 7 min
Order = 6, Accuracy = 60, time = 35 min (!!)
no factoring, Accuracy = 40, time = 28 sec
no factoring, Accuracy = 60, time = 30 sec
no factoring, Accuracy = 40, eval, time = 14.0 sec
no factoring, Accuracy = 60, eval, time = 14.7 sec

The "eval" version changes this code:

foreach my $t ( keys %{$factors} ) {
    my $bft = $ONE->copy()->bmul($t)->bpow($factors->{$t});
    $q->bmul($bft);
}
to this:
my $s = '$q';
foreach my $t ( keys %{$factors} ) {
    next if ( $factors->{$t} == 0 );
    if ( $factors->{$t} > 0 ) {
        $s .= "->bmul($t)" x $factors->{$t};
    }
    else {
        $s .= "->bdiv($t)" x abs($factors->{$t});
    }
}
eval $s;
die "Error on eval: $@" if $@;
I tried it with ->bpow(), but that's 2X slower. (I'd offer that ->bpow() should only be used for non-integer powers.)
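A minimal sketch of the kind of comparison behind that observation; the base, exponent, and harness below are my own stand-ins, not the actual test:

use strict;
use warnings;
use Benchmark qw( cmpthese );
use Math::BigFloat;

Math::BigFloat->accuracy(40);
my $base = Math::BigFloat->new(7);    # arbitrary term
my $exp  = 50;                        # arbitrary repeat count

cmpthese( -3, {
    # a single bpow() call
    bpow => sub { my $x = $base->copy()->bpow($exp) },
    # the equivalent chain of repeated bmul() calls
    bmul => sub {
        my $x = Math::BigFloat->new(1);
        $x->bmul($base) for 1 .. $exp;
    },
} );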

Seems eval has its uses!

Update 2: I made a typo above in the bdiv code -- I should have used abs. It's been changed, and the timings updated (roughly doubled).

-QM
--
Quantum Mechanics: The dreams stuff is made of

Replies are listed 'Best First'.
Re^2: Algorithm for cancelling common factors between two lists of multiplicands
by BrowserUk (Patriarch) on Aug 12, 2005 at 02:59 UTC

    My best performing pure perl attempt so far runs that testcase in just over 8 seconds:

    [ 3:24:33.57] P:\test>fet3 989 9400 43300 2400
    [989 9400 43300 2400]
    0.80706046478676263e-7029
    1 trial of _default ( 8.164s total), 8.164s/trial

    The precision is limited to that of a standard double, but I played some games with scaling during the final calculation in order to achieve a range beyond the 1e±308 of a double.

    Theoretically, it will now handle some pretty big numbers, but the limiting factor is memory, as I am generating some pretty big lists. I should be able to alleviate that substantially by implementing Limbic~Region's idea of cancelling the factorials against each other before generating the product lists, rather than building all the lists and then cancelling them as I am now. I.e.:

     10389!  4570!  44289!  11800!
    ------------------------------
    56089! 989! 9400! 43300! 2400!

    becomes

     44289! 11800! 10389! 4570!
    ------------------------------
    56089! 43300! 9400! 2400! 989!

     (||10389..9401) (||4570..2401)
    ---------------------------------------------
    (||56089..44290) (||43300..11801) 989!

    where (||m..n) indicates the product of the series

    In this example that reduces the number of factorisations from 183,406 to 47,444, which ought to make a big difference both in speed and in memory consumed, and these savings would be even greater for larger observations in all but the most pathological of cases.

    I've had some trouble getting that to work properly, though. Encoding the heuristics that seem obvious by inspection of one or two cases into a generalised algorithm for all inputs is eluding me for the moment; one possible encoding is sketched below.
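    For what it's worth, here is a rough sketch of one such encoding, assuming the simple sort-and-pair heuristic: sort both factorial lists descending, pair them off, and emit residual product ranges. This is my own illustration, not the code behind the timings above:

    use strict;
    use warnings;

    # Sketch: cancel factorials pairwise before expanding anything.
    # Takes the factorial arguments of the numerator and denominator and
    # returns two lists of [lo, hi] product ranges for the reduced
    # numerator and denominator.
    sub cancel_factorials {
        my ( $num, $den ) = @_;
        my @n = sort { $b <=> $a } @$num;
        my @d = sort { $b <=> $a } @$den;
        my ( @top, @bot );
        while ( @n and @d ) {
            my ( $x, $y ) = ( shift @n, shift @d );
            if    ( $x > $y ) { push @top, [ $y + 1, $x ] }   # x!/y! = (y+1..x)
            elsif ( $y > $x ) { push @bot, [ $x + 1, $y ] }   # y!/x! = (x+1..y)
            # equal arguments cancel completely
        }
        push @top, map { [ 2, $_ ] } @n;    # unpaired factorials expand fully
        push @bot, map { [ 2, $_ ] } @d;
        return ( \@top, \@bot );
    }

    my ( $top, $bot ) = cancel_factorials(
        [ 10389, 4570, 44289, 11800 ],
        [ 56089, 989, 9400, 43300, 2400 ],
    );
    print "num: ", join( ' ', map { "($_->[0]..$_->[1])" } @$top ), "\n";
    print "den: ", join( ' ', map { "($_->[0]..$_->[1])" } @$bot ), "\n";

    For the example above this prints the same residual ranges as the hand-worked reduction, though I haven't checked it against more pathological inputs.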

    You might note that 2**32 is not a practical limit, unless most of the terms cancel out.

    I agree that 2**32 is very much an upper bound. The main reason for stating it is that it pretty much precludes the benefit of many of the more sophisticated factorisation methods I've read about. It also places a practical upper bound on the number of primes I need.

    The code (large because of the embedded primes list)


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
      Don't factor!

      I removed map{ primeFactors $_ } from your FET3 sub, and it runs 5X faster! (The last 4 digits differ between the 2 versions.)

      -QM
      --
      Quantum Mechanics: The dreams stuff is made of

        Don't factor!
        Indeed, factoring saves time only when almost all of the factors can be cancelled. Otherwise, the factors will just be multiplied back together, at great cost, to arrive back at nearly the original terms (magnitude-wise), which must then be multiplied with each other to yield the final product.

        For example, if we start with 12 and factor it into 2x2x3, do we win or lose if we can cancel only one term? Two BigInt multiplications (e.g., 2x3xN, where N is large) are probably more costly than the original single multiplication (12xN), which makes our factoring a net loss, even though 2x3=6 is smaller than 12. Simply put, when it comes to run time the number of multiplications is what matters most.

        We probably win only when we can cancel all, or all but one, of the factors. Given that our numerators and denominators are often greatly unbalanced, how often will that happen? Our empirical results suggest: not often enough to be worthwhile. More likely, we will end up with one side of the dividing line plagued with little factors, like aphids on a rosebud, each sucking up an expensive multiplication.
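        To put a number on that, here is a hedged micro-benchmark of the 12xN example above: one multiplication by the unfactored 12 against the 2xN-then-3xN pair. The harness and the size of N are my own choices:

        use strict;
        use warnings;
        use Benchmark qw( cmpthese );
        use Math::BigInt;

        # N is arbitrary; just big enough that each bmul() is expensive.
        my $N = Math::BigInt->new(10)->bpow(5000);

        cmpthese( -3, {
            # one multiplication by the unfactored 12
            by_12  => sub { my $x = $N->copy()->bmul(12) },
            # two multiplications by the partially cancelled 2 and 3
            by_2x3 => sub { my $x = $N->copy()->bmul(2)->bmul(3) },
        } );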

        With that insight, I removed all the factoring and cancelling code and came up with this version:

        #! perl -slw
        use strict;
        use Benchmark::Timer;
        use List::Util qw[ sum reduce max ];

        my $T = new Benchmark::Timer;
        our( $a, $b );

        sub factors{ 2 .. $_[ 0 ] }

        sub normalise {
            my( $s, $n ) = @{ +shift };
            while( 1 ) {
                if   ( $n > 1.0 ) { $n /= 10; $s++; redo; }
                elsif( $n < 0.1 ) { $n *= 10; $s--; redo; }
                else              { last }
            }
            return [ $s, $n ];
        }

        sub sProduct{
            @{
                reduce {
                    $a->[ 0 ] += $b->[ 0 ];
                    $a->[ 1 ] *= $b->[ 1 ];
                    normalise( $a );
                } map{ [ 0, $_ ] } 1, @_
            };
        }

        sub FET4 {
            my @data = @_;
            return unless @data == 4;
            my @C = ( sum( @data[ 0, 2 ] ), sum( @data[ 1, 3 ] ) );
            my @R = ( sum( @data[ 0, 1 ] ), sum( @data[ 2, 3 ] ) );
            my $N = sum @C;
            my( $dScale, $d ) = sProduct map{ factors $_ } grep $_, @R, @C;
            my( $sScale, $s ) = sProduct map{ factors $_ } grep $_, $N, @data;
            return ( $d / $s, $dScale - $sScale );
        }

        die "Bad args @ARGV" unless @ARGV == 4;
        print "[@ARGV]";
        $T->start('');
        printf "%.17fe%d\n", FET4 @ARGV;
        $T->stop('');
        $T->report;
        <STDIN>;
        exit;

        __END__
        P:\test>fet4 989 9400 43300 2400
        [989 9400 43300 2400]
        0.80706046478686522e-7029
        1 trial of _default ( 2.851s total), 2.851s/trial

        P:\test>FET4 5 0 1 4
        [5 0 1 4]
        2.38095238095238090e-2
        1 trial of _default ( 564us total), 564us/trial

        If you can live with the lower accuracy, this compares favourably with my (rough) timings of tmoertel's compiled Haskell code and the Math::Pari version on the (5 0 1 4) and (989 9400 43300 2400) datasets, and blows the M::BF version into dust.

        Where it gets really interesting is when you look at bigger numbers. I tried it with (1e5 2e5 2e5 1e5) and it took 81 seconds.

        P:\test>FET4 1e5 2e5 2e5 1e5
        [1e5 2e5 2e5 1e5]
        1.32499459649722560e-14760
        1 trial of _default ( 81.148s total), 81.148s/trial

        Math::Pari can't handle numbers this big, and the Haskell version ran for roughly an hour before I abandoned it.

        It makes me think that maybe a Math::Big library based around the idea of the scaled math I use in sProduct() might be useful for large-number work that doesn't require absolute precision?
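        As a rough sketch of what the core of such a library might look like, here are the sProduct()/normalise() ideas pulled out into standalone helpers; the names to_scaled() and smul() are mine, not an existing module:

        use strict;
        use warnings;

        # A number is [ exponent, mantissa ] with 0.1 <= mantissa < 1.0,
        # so products can range far beyond the 1e+/-308 of a native
        # double. Assumes positive inputs.
        sub to_scaled {
            my $n = shift;
            my $e = 0;
            while ( $n >= 1.0 ) { $n /= 10; $e++ }
            while ( $n <  0.1 ) { $n *= 10; $e-- }
            return [ $e, $n ];
        }

        sub smul {
            my ( $x, $y ) = @_;
            my $r = to_scaled( $x->[1] * $y->[1] );
            $r->[0] += $x->[0] + $y->[0];
            return $r;
        }

        # e.g. 1000!, far outside the range of a double:
        my $f = to_scaled(1);
        $f = smul( $f, to_scaled($_) ) for 2 .. 1000;
        printf "1000! ~= %.15fe%d\n", $f->[1], $f->[0];    # ~0.402e2568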


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
        "Science is about questioning the status quo. Questioning authority".
        The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.

        I hadn't tried that. Thanks.

        Prior to replacing product() with sProduct(), my earlier non-factoring versions would not have produced a result, because the intermediate terms blew the range of a double. That's what originally necessitated the introduction of M::BF and yielded the 4 1/2 hrs runtime.

        sProduct() effectively gives me a cheap "BigFloat", and negates the need for the factoring.

        Just goes to show there is more than one way to skin a cat :)


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
        "Science is about questioning the status quo. Questioning authority".
        The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
      I am not sure if you had a chance to look at my node Re^7: Algorithm for cancelling common factors between two lists of multiplicands.

      I guess my approach is the same as Limbic~Region's, as I was subtracting the lower factorial terms from the higher ones.

      I copied the input list from one of your previous examples, and I have 45700 instead of 4570, but aside from that the implementation should be easy to understand.

      I guess if you sort the numerator and denominator terms and subtract them you should be all set. I have not proved this formally, but here is a stab at a simple proof that sorting and subtracting should work...

      Let the fraction be

      X! Y! Z!
      --------
      a! b! c!

      WLOG let's assume that X > Y > Z and a > b > c. Also let's assume that b > Y and X > a (weaker assumption: Z > c)...

      NOTE: if we divide X!/a! we will have (X-a) elements.

      To Prove: (X-a) + (Z-c) + (b-Y) is the shortest list one can find, or in other words

          (X-a) + (Z-c) + (b-Y) <= (X-p) + (Z-q) + (r-Y)

      for any p,q,r in permutation(a,b,c).

      Proof: From the above equation, since b > Y and Z > c, r should be equal to either a or b.

      If r = b then the solution is trivial.

      If r = a then we get

          (X-a) + (Z-c) + (b-Y) ?<= (X-b) + (Z-c) + (a-Y)

      Canceling terms:

          -a - c + b ?<= -b - c + a
              -a + b ?<= -b + a    ====> YES, since a > b

      so r = a does not give the smallest list, and therefore r = b.

      Similarly we can also show that (X-a) + (Z-c) + (b-Y) <= (X-a) + (Y-c) + (b-Z).
      I don't think this is a rigorous proof of the method, but I sort of feel sorting and subtracting should give us what we need...
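      As a quick sanity check of that intuition, one can brute-force every pairing for a concrete triple of factorials and count the residual terms. The sketch below is my own, using three of the factorials from the example above; the sorted pairing does come out shortest:

      use strict;
      use warnings;

      # For numerator factorials (X,Y,Z) paired with denominator
      # factorials (p,q,r), the residual list has |X-p| + |Y-q| + |Z-r|
      # terms. Enumerate all pairings and report each count.
      sub permute {
          my @items = @_;
          return ( [] ) unless @items;
          my @perms;
          for my $i ( 0 .. $#items ) {
              my @rest   = @items;
              my ($pick) = splice @rest, $i, 1;
              push @perms, [ $pick, @$_ ] for permute(@rest);
          }
          return @perms;
      }

      my @num = ( 44289, 11800, 10389 );                 # X > Y > Z
      for my $p ( permute( 56089, 43300, 9400 ) ) {      # a > b > c
          my $terms = 0;
          $terms += abs( $num[$_] - $p->[$_] ) for 0 .. $#num;
          print "pair with (@$p): $terms residual terms\n";
      }
      # The sorted pairing (56089, 43300, 9400) gives the minimum, 44289.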

      cheers

      SK

      PS: I think there will be 47448 elements and not 47444 as you suggested, as you need to count the first element too.

        Sorry sk. I try to always attribute ideas, but it's been a long thread over several days, and I've had other conversations beyond it. I may have misremembered the sequence of things.

        For the algorithm, the problem is the implementation, not the sort'n'subtract. With 4 values on top and 5 underneath, there are numerous ways in which the subtraction can be done, e.g.:

        a b c d               (b-w)(d-y)      (a-v)(c-x)                 1
        --------- might give ----------- or ----------- or --------------------- or ...
        v w x y z            (v-a)(x-c)z     (w-b)(y-d)z   (v-a)(w-b)(x-c)(y-d)z

        And for low numbers, the time spent ordering and eliminating outweighs the benefits.

        It's just a case of coding the mechanism in an economical fashion.


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
        "Science is about questioning the status quo. Questioning authority".
        The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
Re^2: Algorithm for cancelling common factors between two lists of multiplicands
by hv (Prior) on Aug 13, 2005 at 00:20 UTC

    Here's my take on it. Highlights: use Eratosthenes' sieve to determine a factor or primality for each integer up to the max required; work out when an additional numerator or denominator term kicks in (@switch), and pass over the numbers in reverse order so that each need be visited only once; build up two BigInts for numerator and denominator, and just switch to BigFloats to divide the two at the end.
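    A minimal sketch of that first highlight, a least-prime-factor sieve that lets any integer up to the maximum be factored by repeated table lookup (my illustration, not hv's implementation):

    use strict;
    use warnings;

    # $lpf->[$n] holds the smallest prime factor of $n, or 0 when $n is
    # prime, so factoring any n <= $max is a chain of table lookups.
    sub build_lpf {
        my $max = shift;
        my @lpf = (0) x ( $max + 1 );
        for my $p ( 2 .. int sqrt $max ) {
            next if $lpf[$p];    # composite, already marked
            for ( my $n = $p * $p ; $n <= $max ; $n += $p ) {
                $lpf[$n] ||= $p; # keep the smallest factor found
            }
        }
        return \@lpf;
    }

    sub factorise {
        my ( $n, $lpf ) = @_;
        my @factors;
        while ( my $p = $lpf->[$n] ) {    # 0 means $n is now prime
            push @factors, $p;
            $n /= $p;
        }
        push @factors, $n if $n > 1;
        return @factors;
    }

    my $lpf = build_lpf(56089);
    print join( ' x ', factorise( 56088, $lpf ) ), "\n";  # 2 x 2 x 2 x 3 x 3 x 19 x 41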

    The single division at the end takes over 2/3 of the total time, so I'd expect using GMP or Pari for the maths would cut it greatly:

    zen% perl t1
    807060464786760409757687767524310994172947695099349876516088e-7089
    sieve:   0 wallclock secs ( 0.09 usr +  0.02 sys =  0.11 CPU)
    build:   4 wallclock secs ( 4.38 usr +  0.00 sys =  4.38 CPU)
    divide:  9 wallclock secs ( 8.90 usr +  0.00 sys =  8.90 CPU)
    total:  13 wallclock secs (13.37 usr +  0.02 sys = 13.39 CPU)
    zen%

    Things that made negligible difference: dropping accuracy to 40 (!); initialising my $bii = Math::BigInt->new($i) before the multiply loops; the $r->bsstr call.

    I noticed that the numerator consists of 855 prime factors comprising 755 distinct primes; the denominator has 2308, of which 2306 are distinct. No surprise, then, that bpow() isn't a win.

    Here's the code:

    Hugo
