http://qs1969.pair.com?node_id=482050


in reply to Re^2: Algorithm for cancelling common factors between two lists of multiplicands
in thread Algorithm for cancelling common factors between two lists of multiplicands

Can you provide a sample data set (i.e., matrix) that you consider to be "large"? Also, could you give me an idea of how long it takes to compute Pcutoff without using arithmetic optimizations? (I would like to try out a quick Haskell-based implementation I whipped up on a real data set.)

Re^4: Algorithm for cancelling common factors between two lists of multiplicands
by BrowserUk (Patriarch) on Aug 08, 2005 at 22:58 UTC

    Sure. For the following 2x2:

              X        Y
      A      989    9,400   10,389
      B   43,300    2,400   45,700
          44,289   11,800   56,089

    The formula comes out to

         (44,289! * 11,800!) * (10,389! * 45,700!)
    ------------------------------------------------------
    56,089! * 989! * 9,400! * 43,300! * 11,800! * 2,400!

    Which infinite-precision arithmetic will calculate, but quite slowly. And remember: in order to determine whether the result is significant, there are 11,000 more of these calculations to perform, and these numbers are still relatively small. And, theoretically at least, the FET can be applied to more than a 2x2 matrix.
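    The magnitude of the result can be checked without any big-number arithmetic by working in logs. A quick sketch (mine, not from the thread; note the 11,800! cancels top and bottom):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# log10(n!) summed term by term; lgamma would be faster, but this
# keeps the sketch in pure Perl.
sub log10_factorial {
    my $n   = shift;
    my $sum = 0;
    $sum += log($_) / log(10) for 2 .. $n;
    return $sum;
}

# log10 of (44289! 11800! 10389! 45700!) / (56089! 989! 9400! 43300! 11800! 2400!)
# -- the 11800! cancels top and bottom.
my $log10_p = log10_factorial(44289) + log10_factorial(10389) + log10_factorial(45700)
            - log10_factorial(56089) - log10_factorial(989)   - log10_factorial(9400)
            - log10_factorial(43300) - log10_factorial(2400);

printf "log10(P) = %.3f\n", $log10_p;   # consistent with ~8.07E-7030
```

    This gives the exponent directly, which is a useful cross-check on any exact implementation.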


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
      For this matrix, I can compute the exact Pcutoff in about 1 second (on a 1.6-GHz Celeron laptop). How long does the brute-force approach take?
      [thor@arinmir fishers-exact-test]$ cat ex1.dat
      989 9400
      43300 2400
      [thor@arinmir fishers-exact-test]$ time ./fet < ex1.dat > /dev/null

      real    0m1.007s
      user    0m0.991s
      sys     0m0.012s

        The results would be interesting to see.


      Quick question: If the sample size is so large, is there a reason you aren't using the Chi-square test? My understanding is Fisher's Exact Test may be preferred when the sample size isn't large enough to reasonably support the large-sample approximation for the Chi-square test. Since you have a large sample, why not take the easy road?
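      For reference, the large-sample statistic in question is cheap to compute. A sketch (mine) using the standard Pearson chi-square formula for a 2x2 table, applied to the example table above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Pearson chi-square statistic (no continuity correction) for a
# 2x2 table [[a, b], [c, d]]:
#   chi2 = N * (a*d - b*c)^2 / (R1 * R2 * C1 * C2)
sub chi_square_2x2 {
    my ($a, $b, $c, $d) = @_;
    my $n = $a + $b + $c + $d;
    my ($r1, $r2) = ($a + $b, $c + $d);
    my ($c1, $c2) = ($a + $c, $b + $d);
    return $n * ($a * $d - $b * $c) ** 2 / ($r1 * $r2 * $c1 * $c2);
}

printf "chi2 = %.1f\n", chi_square_2x2(989, 9400, 43300, 2400);
```

      The usual caveat applies: the approximation is only trustworthy when the expected cell counts are large enough.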
        ... is there a reason you aren't using the Chi-square test?

        I'm not trying to solve the sample problem. That was just an example I found on the web and used as a test.

        I set out to solve the problem of performing the FET using Perl. First I did it with Math::Pari, which gives a result of 8.070604647867604097576877668E-7030 in 26 ms, but I was unsure about the accuracy. It also imposes a binary dependency.

        #! perl -slw
        use strict;
        use Benchmark::Timer;
        use List::Util qw[ sum reduce ];
        use Math::Pari qw[ factorial ];

        $a = $b;    # silence 'used only once' warnings for reduce's $a/$b

        sub product { reduce { $a *= $b } 1, @_ }

        sub FishersExactTest {
            my @data = @_;
            return unless @data == 4;

            my @C = ( sum( @data[ 0, 2 ] ), sum( @data[ 1, 3 ] ) );
            my @R = ( sum( @data[ 0, 1 ] ), sum( @data[ 2, 3 ] ) );
            my $N = sum @C;

            my $dividend = product map{ factorial $_ } grep $_, @R, @C;
            my $divisor  = product map{ factorial $_ } grep $_, $N, @data;

            return $dividend / $divisor;
        }

        my $T = new Benchmark::Timer;
        $T->start( '' );
        print FishersExactTest 989, 9400, 43300, 2400;
        $T->stop( '' );
        $T->report;

        __END__
        P:\test>MP-FET.pl
        8.070604647867604097576877668E-7030
        1 trial of _default ( 25.852ms total), 25.852ms/trial

        So, then I coded it using Math::BigFloat

        #! perl -slw
        use strict;
        use Benchmark::Timer;
        use List::Util qw[ reduce ];
        use Math::BigFloat;

        $a = $b;    # silence 'used only once' warnings for reduce's $a/$b

        sub product { reduce { $a *= $b } 1, @_ }
        sub sum     { reduce { $a += $b } 0, @_ }

        sub FishersExactTest {
            my @data = map{ Math::BigFloat->new( $_ ) } @_;
            return unless @data == 4;

            my @C = ( sum( @data[ 0, 2 ] ), sum( @data[ 1, 3 ] ) );
            my @R = ( sum( @data[ 0, 1 ] ), sum( @data[ 2, 3 ] ) );
            my $N = sum @C;

            my $dividend = product map{ $_->bfac } grep $_, @R, @C;
            my $divisor  = product map{ $_->bfac } grep $_, $N, @data;

            return $dividend / $divisor;
        }

        my $T = new Benchmark::Timer;
        $T->start( '' );
        print FishersExactTest 989, 9400, 43300, 2400;
        $T->stop( '' );
        $T->report;

        But that ran for 20 minutes without producing any output before I killed it. (I've set it running again now, and my machine's fan has been thrashing at full speed for the last 25 minutes.)

        Whilst I was waiting for the BigFloat version, I coded this version, which attempts to reduce the size of the problem by eliminating (exactly common) factors:

        sub FishersExactTest2 {
            my @data = @_;
            return unless @data == 4;

            my @C = ( sum( @data[ 0, 2 ] ), sum( @data[ 1, 3 ] ) );
            my @R = ( sum( @data[ 0, 1 ] ), sum( @data[ 2, 3 ] ) );
            my $N = sum @C;

            ## factors() (not shown in the original post) yields the terms
            ## making up each factorial.
            my %dividends; $dividends{ $_ }++ for map{ factors $_ } grep $_, @R, @C;
            my %divisors;  $divisors { $_ }++ for map{ factors $_ } grep $_, $N, @data;

            for my $i ( keys %divisors ) {
                if( exists $dividends{ $i } ) {
                    $divisors{ $i }--, $dividends{ $i }--
                        while $divisors{ $i } and $dividends{ $i };
                    delete $divisors { $i } unless $divisors { $i };
                    delete $dividends{ $i } unless $dividends{ $i };
                }
            }

            my $dividend = product( map{ ( $_ ) x $dividends{ $_ } } keys %dividends );
            my $divisor  = product( map{ ( $_ ) x $divisors { $_ } } keys %divisors  );

            return $dividend / $divisor;
        }

        This works well for smallish values, but cannot handle the example I gave above (NV overflow).

        It was then I started thinking about how to eliminate more factors from the equation so as to reduce the size of the intermediate terms, and posted my SoPW. I think that hv's solution of expanding all terms to their prime factorizations before performing the cancelling out will be a winner--but I haven't finished coding that yet.
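        That approach can be sketched as follows (my sketch; hv's actual code may differ). The exponent of each prime in n! comes straight from Legendre's formula, so the factorials never need to be expanded at all:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Exponent of prime $p in n! (Legendre's formula):
#   e_p(n!) = floor(n/p) + floor(n/p^2) + ...
sub prime_exponent_in_factorial {
    my ($n, $p) = @_;
    my ($e, $q) = (0, $p);
    while ($q <= $n) {
        $e += int($n / $q);
        $q *= $p;
    }
    return $e;
}

# Full prime factorisation of n! as { prime => exponent }.
sub factorial_prime_factors {
    my $n = shift;
    my %exp;
    $exp{$_} = prime_exponent_in_factorial($n, $_) for primes_up_to($n);
    return \%exp;
}

# Simple sieve of Eratosthenes.
sub primes_up_to {
    my $n = shift;
    my @is_prime = (1) x ($n + 1);
    my @primes;
    for my $i (2 .. $n) {
        next unless $is_prime[$i];
        push @primes, $i;
        for (my $j = $i * $i; $j <= $n; $j += $i) { $is_prime[$j] = 0 }
    }
    return @primes;
}

# 10! = 2^8 * 3^4 * 5^2 * 7
my $f = factorial_prime_factors(10);
print join ' * ', map { "$_^$f->{$_}" } sort { $a <=> $b } keys %$f;
```

        Cancelling is then just subtracting exponent hashes, and only the surviving prime powers ever get multiplied out.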



        As I understand the FET, it is important that the probabilities add up to 1.
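        That is easy to sanity-check on a small table: holding the margins fixed and summing the hypergeometric probability over every possible top-left cell should give exactly 1. A sketch (mine), with margins small enough for plain NVs:

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub factorial { my $n = shift; my $f = 1; $f *= $_ for 2 .. $n; $f }
sub choose { my ($n, $k) = @_; factorial($n) / ( factorial($k) * factorial($n - $k) ) }

# Fixed margins: N = 12, first row total R1 = 5, first column total C1 = 6.
my ($N, $R1, $C1) = (12, 5, 6);
my $C2 = $N - $C1;

my $sum = 0;
for my $a ( 0 .. $R1 ) {
    next if $a > $C1 or $R1 - $a > $C2;   # cell counts must fit their columns
    $sum += choose($C1, $a) * choose($C2, $R1 - $a) / choose($N, $R1);
}
printf "sum over all tables = %.10f\n", $sum;
```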


      BrowserUk,
      Once you have a quotient of factorials that has been reduced by some factoring method (GCD or prime), it can be reduced even further by subtraction.
      47! * 1091!     (1002 .. 1091)
      -----------  =  --------------
      55! * 1001!       (48 .. 55)
      This is only a savings when the factorial is being calculated by multiplying all the terms and not by some other approximation method.
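      With small numbers the saving is easy to demonstrate (the ranges below are my own toy example, not from the thread):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw[ reduce ];

sub product   { reduce { $a * $b } 1, @_ }
sub factorial { product( 2 .. $_[0] ) }

# Naive form: 5! * 10! / (7! * 8!)
my $naive = factorial(5) * factorial(10) / ( factorial(7) * factorial(8) );

# After subtracting: 5! cancels into 7!, 8! cancels into 10!,
# leaving (9 .. 10) / (6 .. 7).
my $reduced = product( 9 .. 10 ) / product( 6 .. 7 );

printf "%.6f %.6f\n", $naive, $reduced;   # both equal 15/7
```

      Only 4 multiplications and a division remain, instead of expanding four factorials in full.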

      Cheers - L~R