BrowserUk
<p>With that insight, I removed all the factoring and cancelling code and came up with this version:
<code>
#! perl -slw
use strict;

use Benchmark::Timer;
my $T = new Benchmark::Timer;

use List::Util qw[ sum reduce max ];
our( $a, $b );

## The factors of n! are just 2 .. n (an empty list for n < 2).
sub factors{ 2 .. $_[ 0 ] }

## Take a [ scale, mantissa ] pair and bring the mantissa back into
## (0.1, 1.0], accumulating the powers of ten in the scale.
sub normalise {
    my( $s, $n ) = @{ +shift };
    while( 1 ) {
        if(    $n > 1.0 ) { $n /= 10; $s++; redo; }
        elsif( $n < 0.1 ) { $n *= 10; $s--; redo; }
        else              { last }
    }
    return [ $s, $n ];
}

## Multiply a list of factors as ( scale, mantissa ), normalising after
## every step so the mantissa never overflows or underflows.
sub sProduct{
    @{
        reduce{
            $a->[ 0 ] += $b->[ 0 ];
            $a->[ 1 ] *= $b->[ 1 ];
            normalise( $a );
        } map{ [ 0, $_ ] } 1, @_
    };
}

sub FET4 {
    my @data = @_;
    return unless @data == 4;
    my @C = ( sum( @data[ 0, 2 ] ), sum( @data[ 1, 3 ] ) );
    my @R = ( sum( @data[ 0, 1 ] ), sum( @data[ 2, 3 ] ) );
    my $N = sum @C;
    ## Numerator: product of the row and column marginal factorials.
    my( $dScale, $d ) = sProduct map{ factors $_ } grep $_, @R, @C;
    ## Denominator: N! times the factorials of the four cells.
    my( $sScale, $s ) = sProduct map{ factors $_ } grep $_, $N, @data;
    return ( $d / $s, $dScale - $sScale );
}

die "Bad args @ARGV" unless @ARGV == 4;
print "[@ARGV]";

$T->start('');
printf "%.17fe%d\n", FET4 @ARGV;
$T->stop('');
$T->report;

<STDIN>;
exit;
__END__
P:\test>fet4 989 9400 43300 2400
[989 9400 43300 2400]
0.80706046478686522e-7029
1 trial of _default ( 2.851s total), 2.851s/trial
P:\test>FET4 5 0 1 4
[5 0 1 4]
2.38095238095238090e-2
1 trial of _default ( 564us total), 564us/trial
</code>
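<p>The scaled-product trick carries over to other languages directly. Here is an illustrative Python translation (the function names are mine, not from the code above) that computes the same Fisher point probability, <code>(R1! R2! C1! C2!) / (N! a! b! c! d!)</code>, and reproduces the 2.38095…e-2 result for the (5 0 1 4) table:

```python
def normalise(exp10, mantissa):
    """Keep the mantissa in (0.1, 1.0], tracking powers of ten separately."""
    while mantissa > 1.0:
        mantissa /= 10.0
        exp10 += 1
    while mantissa < 0.1:
        mantissa *= 10.0
        exp10 -= 1
    return exp10, mantissa

def s_product(factors):
    """Product of many integers as (exp10, mantissa), normalising each step
    so the mantissa never overflows or underflows a native float."""
    exp10, mantissa = 0, 1.0
    for f in factors:
        exp10, mantissa = normalise(exp10, mantissa * f)
    return exp10, mantissa

def fet4(a, b, c, d):
    """Fisher point probability for a 2x2 table, returned as
    (mantissa, decimal exponent)."""
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    n = a + b + c + d
    # Numerator: product of the row and column marginal factorials.
    num_exp, num_man = s_product(f for m in rows + cols
                                   for f in range(2, m + 1))
    # Denominator: N! times the factorials of the four cells.
    den_exp, den_man = s_product(f for m in (n, a, b, c, d)
                                   for f in range(2, m + 1))
    return num_man / den_man, num_exp - den_exp

mantissa, exp10 = fet4(5, 0, 1, 4)
print(f"{mantissa:.17f}e{exp10}")   # p = 1/42, i.e. 2.380952...e-2
```

The key point is the same as in <code>sProduct()</code>: only the mantissa lives in floating point, so precision is limited to a double's ~15-17 significant digits, but the exponent is an ordinary integer and can grow to -7029 (or beyond) without overflow.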
<p>For the (5 0 1 4) and (989 9400 43300 2400) datasets, if you can live with the lower accuracy, this compares favourably with my (rough) timings of [tmoertel]'s compiled Haskell code and the Math::Pari version, and blows the Math::BigFloat version into the dust.
<p>Where it gets really interesting is when you look at bigger numbers. I tried it with (1e5 2e5 2e5 1e5) and it took 81 seconds.
<code>
P:\test>FET4 1e5 2e5 2e5 1e5
[1e5 2e5 2e5 1e5]
1.32499459649722560e-14760
1 trial of _default ( 81.148s total), 81.148s/trial
</code>
<p>Math::Pari can't handle numbers this big, and the Haskell version ran for roughly an hour before I abandoned it.
<p>It makes me think that a Math::Big-style library built around the scaled math I use in <code>sProduct()</code> might be useful for large-number work that doesn't require absolute precision.
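<p>To make the library idea concrete, here is a minimal sketch (in Python, with a hypothetical <code>Scaled</code> class name of my own invention, not an existing CPAN or PyPI module) of what the core of such a type might look like: a native float mantissa kept in (0.1, 1.0] plus an unbounded integer decimal exponent, with multiply and divide just operating on the two parts:

```python
from functools import reduce

class Scaled:
    """Sketch of a scaled big-number type: float mantissa in (0.1, 1.0]
    plus an unbounded decimal exponent.  Fast like native floats, never
    overflows, but precision is capped at a double's ~15-17 digits."""

    __slots__ = ("exp10", "mantissa")

    def __init__(self, value=1.0, exp10=0):
        self.exp10, self.mantissa = exp10, float(value)
        self._normalise()

    def _normalise(self):
        # Same loop as normalise() in the Perl: shuttle powers of ten
        # out of the mantissa and into the integer exponent.
        while self.mantissa > 1.0:
            self.mantissa /= 10.0
            self.exp10 += 1
        while 0.0 < self.mantissa < 0.1:
            self.mantissa *= 10.0
            self.exp10 -= 1

    def __mul__(self, other):
        return Scaled(self.mantissa * other.mantissa,
                      self.exp10 + other.exp10)

    def __truediv__(self, other):
        return Scaled(self.mantissa / other.mantissa,
                      self.exp10 - other.exp10)

    def __repr__(self):
        return f"{self.mantissa:.15g}e{self.exp10}"

# 1000! without any bignum arithmetic: ~4.0239e2567.
f1000 = reduce(lambda a, b: a * b, (Scaled(i) for i in range(2, 1001)))
```

Addition and subtraction are the awkward operations for such a type (the mantissas must first be brought to a common exponent, losing everything beyond the float's precision), which is why it suits factorial-ratio work like this so well: the whole computation is multiplies and divides.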
<div class="pmsig"><div class="pmsig-171588">
<hr />
<font size=1 >
<div>Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.</div>
<div>Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?</div>
<div>"Science is about questioning the status quo. Questioning authority". </div>
<div>The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.</div>
</font>
</div></div>