in reply to strange collision when calculating p-values using Statistics::TTest

The p-values are very small (around 1.6e-12), so I wonder if this isn't some sort of precision issue.
It appears to be a limitation of the tprob function in Statistics::Distributions: its approximation isn't accurate enough at such small tail probabilities, so distinct t statistics can end up mapped to the same p-value.
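To illustrate the kind of collision limited precision causes, here is a minimal pure-Perl sketch. It does not use Statistics::Distributions itself; it uses the standard asymptotic normal-tail formula (Mills ratio) just to produce realistically tiny probabilities, then shows that any routine carrying only ~7 decimal digits of absolute precision collapses two distinct values to the same result:

```perl
use strict;
use warnings;

# Asymptotic upper-tail probability of the standard normal
# (Mills-ratio approximation), used here only to generate
# tiny p-values of the right magnitude -- NOT the TTest internals.
sub tail {
    my ($x) = @_;
    return exp( -$x**2 / 2 ) / ( $x * sqrt( 2 * 3.14159265358979 ) );
}

my $p1 = tail(7.2);    # ~3e-13
my $p2 = tail(7.4);    # ~7e-14

# At full double precision the two values remain distinct:
printf "%.3e vs %.3e\n", $p1, $p2;

# But rounded to ~7 decimal digits of absolute precision,
# both collapse to the same value -- a "collision":
printf "%.7f vs %.7f\n", $p1, $p2;    # both print 0.0000000
```

The point is only that an approximation accurate to a fixed number of decimal places is fine for p-values like 0.05 but indistinguishable from zero in the 1e-12 range the original poster is working with.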

Replies are listed 'Best First'.
Re^2: strange collision when calculating p-values using Statistics::TTest
by roadnottaken (Acolyte) on Jun 25, 2012 at 13:49 UTC
    Thanks for the comments. In case anyone else runs into a similar problem, I have found a workaround. I have not figured out what's going on with Statistics::TTest or Statistics::Distributions, but I have found another module that works correctly with these data. As I understand it, a one-way ANOVA on a single pair of distributions is equivalent to a Student's t-test, so I tried Statistics::ANOVA and had success. Using the definition of %datasets from the above code, the following loop calculates correct p-values (matching the values that Excel gives):
    foreach my $dataset (sort keys %datasets) {
        my $aov = Statistics::ANOVA->new();
        $aov->load( "$dataset\1", \@{ $datasets{$dataset}[0] } );
        $aov->add(  "$dataset\2", \@{ $datasets{$dataset}[1] } );
        my $str = $aov->anova( independent => 1, parametric => 1, ordinal => 0 );
        print $str->{_stat}->{p_value} . "\n";
    }