http://qs1969.pair.com?node_id=483008


in reply to Re^5: Algorithm for cancelling common factors between two lists of multiplicands
in thread Algorithm for cancelling common factors between two lists of multiplicands

Much nicer. Falling back to the approximation only when needed makes a lot of sense.

BTW: You have a couple of artifacts left over from the earlier caching scheme:

my $max_arr;
my @ln_fact;
...
BEGIN {
    $max_arr  = 10000;
    $#ln_fact = $max_arr - 1;


Re^7: Algorithm for cancelling common factors between two lists of multiplicands
by tlm (Prior) on Aug 11, 2005 at 17:32 UTC

    IMO, the most significant improvement in the latest version of the code is that it computes at most 18 factorials altogether (9 in the one-sided case) per call to fishers_exact. After the first term in the sum, all subsequent terms are computed as multiplicative adjustments to the first term:

    $delta *= ( ( $aa-- * $dd-- )/( ++$bb * ++$cc ) );
    I have not seen any significant loss of accuracy from this in my tests.
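
    A minimal sketch of that scheme (not tlm's module; ln_fact() below is a stand-in for however the nine log-factorials are obtained, and only one direction of the tail is swept):

    use strict;
    use warnings;

    sub ln_fact {    # stand-in: direct sum of logs, for illustration only
        my $n = shift;
        my $sum = 0;
        $sum += log( $_ ) for 2 .. $n;
        return $sum;
    }

    sub one_sided_fishers_exact {
        my ( $a, $b, $c, $d ) = @_;
        my $n = $a + $b + $c + $d;

        # The observed table costs the only nine (log-)factorial
        # evaluations of the entire call.
        my $ln_p = ln_fact( $a + $b ) + ln_fact( $c + $d )
                 + ln_fact( $a + $c ) + ln_fact( $b + $d )
                 - ln_fact( $n )
                 - ln_fact( $a ) - ln_fact( $b )
                 - ln_fact( $c ) - ln_fact( $d );

        my $delta = exp( $ln_p );
        my $tail  = $delta;

        # Each successive table in this direction is a cheap multiplicative
        # adjustment of the previous term (the line quoted above).
        my ( $aa, $bb, $cc, $dd ) = ( $a, $b, $c, $d );
        while ( $aa > 0 && $dd > 0 ) {
            $delta *= ( ( $aa-- * $dd-- ) / ( ++$bb * ++$cc ) );
            $tail  += $delta;
        }
        return $tail;
    }

    Which tail is wanted, and how a two-sided p-value would be assembled from the tails, is omitted here; the point is only that the inner loop touches no factorials.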

    To put into perspective the effect of using Gosper's approximation only when Math::Pari::factorial bombs, instead of using a hard cutoff of 10000 for the switch-over: the relative difference (i.e. difference divided by average) between M::P::factorial(10_000) and Gosper's approximation to 10,000! is smaller than 1e-10, and it only gets better for larger arguments. Therefore Gosper's is more than adequate in this range, AFAIC.
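
    For concreteness, the approximation in question is presumably the usual Gosper form, n! ~ sqrt( (2n + 1/3) * pi ) * (n/e)^n. Here is a small self-contained check (not taken from the post) of how close its log comes to a directly summed ln(n!) at n = 10,000; the absolute difference of the two logs approximates the relative error in n! itself:

    use strict;
    use warnings;

    use constant PI => 4 * atan2( 1, 1 );

    sub ln_fact_gosper {    # Gosper: n! ~ sqrt( (2n + 1/3) * pi ) * (n/e)^n
        my $n = shift;
        return 0.5 * log( ( 2 * $n + 1 / 3 ) * PI ) + $n * ( log( $n ) - 1 );
    }

    # Reference value: compensated (Kahan) summation of log(2)+...+log(n),
    # so the reference's own rounding error stays below the ~1e-10 level
    # being checked here.
    sub ln_fact_direct {
        my $n = shift;
        my ( $sum, $c ) = ( 0, 0 );
        for my $k ( 2 .. $n ) {
            my $y = log( $k ) - $c;
            my $t = $sum + $y;
            $c    = ( $t - $sum ) - $y;
            $sum  = $t;
        }
        return $sum;
    }

    my $n      = 10_000;
    my $gosper = ln_fact_gosper( $n );
    my $direct = ln_fact_direct( $n );
    printf "ln %d!: gosper %.6f, direct %.6f, |diff| %.2e (~rel. error in %d!)\n",
        $n, $gosper, $direct, abs( $gosper - $direct ), $n;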

    My main reason for removing the hard cutoff was just to simplify the code for posting. In my own production code I don't wait for M::P::factorial to fail before switching over to Gosper's, because for large arguments the benefits in terms of accuracy from using M::P::factorial are minuscule but the performance hit is substantial. For example, for 10,000! (relative error for Gosper's: < 1e-10):

                Rate     pari   gospers
    pari       604/s       --      -94%
    gospers   9657/s    1500%        --
    and for 100,000! (relative error for Gosper's: < 1e-12):
                Rate     pari   gospers
    pari      49.8/s       --     -100%
    gospers  10001/s   19996%        --
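
    For reference, a self-contained sketch (not tlm's benchmark script) of how a comparison of this shape can be timed with the core Benchmark module; it assumes the two candidates are Math::Pari::factorial, as named above, and the Gosper log-factorial from the sketch higher up, and that the raw calls are what the quoted tables time:

    use strict;
    use warnings;
    use Benchmark qw( cmpthese );
    use Math::Pari ();    # for Math::Pari::factorial

    use constant PI => 4 * atan2( 1, 1 );

    sub ln_fact_gosper {    # same Gosper formula as in the sketch above
        my $n = shift;
        return 0.5 * log( ( 2 * $n + 1 / 3 ) * PI ) + $n * ( log( $n ) - 1 );
    }

    my $n = 10_000;

    # Note: the two cases return the value in different forms, the exact
    # factorial (a huge PARI integer) versus its natural log; that this is
    # how the original comparison was run is an assumption.
    cmpthese( -3, {
        pari    => sub { my $f  = Math::Pari::factorial( $n ) },
        gospers => sub { my $lf = ln_fact_gosper( $n ) },
    } );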

    Also note that, as with the previous version of the code, this one returns a value very close to 1 for fishers_exact( 989, 9400, 43300, 2400 ).

    And thanks for the heads-up about the stray code in Fisher's Exact Test. Fixed.

    the lowliest monk