http://qs1969.pair.com?node_id=483442

With that insight, I removed all the factoring and cancelling code and came up with this version:

```#! perl -slw
use strict;
use Benchmark::Timer; my $T = new Benchmark::Timer;
use List::Util qw[ sum reduce max ]; our( $a, $b );

## All the factors contributed by n! (skipping the trivial 1).
sub factors{ 2 .. $_[ 0 ] }

## Keep the mantissa in (0.1, 1.0], carrying any overflow into the exponent.
sub normalise {
    my( $s, $n ) = @{ +shift };
    while( 1 ) {
        if(    $n > 1.0 ) { $n /= 10; $s++; redo; }
        elsif( $n < 0.1 ) { $n *= 10; $s--; redo; }
        else { last }
    }
    return [ $s, $n ];
}

## Product of a factor list as a normalised ( exponent, mantissa ) pair,
## so huge products never overflow a double.
sub sProduct{
    @{
        reduce{
            $a->[ 0 ] += $b->[ 0 ];
            $a->[ 1 ] *= $b->[ 1 ];
            normalise( $a );
        } map{ [ 0, $_ ] } 1, @_
    };
}

## Fisher's Exact Test for a 2x2 table; returns ( mantissa, power-of-10 exponent ).
sub FET4 {
    my @data = @_;
    return unless @data == 4;
    my @C = ( sum( @data[ 0, 2 ] ), sum( @data[ 1, 3 ] ) );
    my @R = ( sum( @data[ 0, 1 ] ), sum( @data[ 2, 3 ] ) );
    my $N = sum @C;
    my( $dScale, $d ) = sProduct map{ factors $_ } grep $_, @R, @C;
    my( $sScale, $s ) = sProduct map{ factors $_ } grep $_, $N, @data;
    return ( $d / $s, $dScale - $sScale );
}

die "Bad args @ARGV" unless @ARGV == 4;
print "[@ARGV]";
$T->start('');
printf "%.17fe%d\n", FET4 @ARGV;
$T->stop('');
$T->report;
<STDIN>;    ## pause so the console window stays open
exit;
__END__
P:\test>fet4 989 9400 43300 2400
[989 9400 43300 2400]
0.80706046478686522e-7029
1 trial  of _default (   2.851s total), 2.851s/trial

P:\test>FET4 5 0 1 4
[5 0 1 4]
2.38095238095238090e-2
1 trial  of _default (    564us total), 564us/trial
```

Which, if you can live with the lower accuracy, compares favourably on the (5 0 1 4) and (989 9400 43300 2400) datasets with my (rough) timings of tmoertel's compiled Haskell code and the Math::Pari version, and blows the Math::BigFloat version into dust.

Where it gets really interesting is when you look at bigger numbers. I tried it with (1e5 2e5 2e5 1e5) and it took 81 seconds.
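The core trick above — carrying the running product as a normalised (exponent, mantissa) pair so that enormous factorial products never overflow a double — is easy to re-sketch in Python. This is my own translation for illustration, not part of the original post; the function names mirror the Perl `normalise` and `sProduct`:

```python
def normalise(exp, mant):
    """Renormalise so that 0.1 < mant <= 1.0, pushing overflow into exp."""
    while mant > 1.0:
        mant /= 10.0
        exp += 1
    while mant < 0.1:
        mant *= 10.0
        exp -= 1
    return exp, mant

def s_product(factors):
    """Product of the factors as a normalised (exponent, mantissa) pair."""
    exp, mant = 0, 1.0
    for f in factors:
        mant *= f
        exp, mant = normalise(exp, mant)
    return exp, mant

# 1000! overflows a plain float, but as a scaled pair it is trivial:
exp, mant = s_product(range(2, 1001))
print(f"1000! ~= {mant:.6f}e{exp}")   # something like 0.402387e2568
```

A ratio of two such products, as in `FET4`, is then just a division of mantissas and a subtraction of exponents.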

```P:\test>FET4 1e5 2e5 2e5 1e5
[1e5 2e5 2e5 1e5]
1.32499459649722560e-14760
1 trial  of _default (  81.148s total), 81.148s/trial
```