in reply to Re^7: Last comment (and nail in the coffin of) of S::CS :)
in thread Shuffling CODONS
"However, its calculated statistic is more or less the same as with R's. And after my modification to return a p-value, its p-value is comparable with R's."
Hm. If that is the case(*) then, given the complete instability of S::CS' verdict on F-Y with MT over successive runs on the same dataset and the same number of iterations, I'd have to conclude that Chi2 isn't suitable for testing this algorithm and/or data.
It would be interesting to see how consistent R's results are when re-run on the same data/iterations. If R also produced unstable results, that would be proof that it's the wrong test.
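As a rough sketch of that consistency check (not the thread's actual harness), something like the following could drive R's chisq.test from Perl via Statistics::R; get_bucket_counts() is a hypothetical stand-in for whatever produces the per-bucket counts from one F-Y-with-MT run, and the placeholder numbers are made up:

    #!/usr/bin/perl
    # Sketch: re-run R's chisq.test on counts from several independent
    # shuffle runs and watch how much the p-value wobbles.
    use strict;
    use warnings;
    use Statistics::R;

    my $R = Statistics::R->new();
    for my $run ( 1 .. 10 ) {
        my @counts = get_bucket_counts();   # hypothetical helper
        $R->set( 'counts', \@counts );
        $R->run( q{p <- chisq.test( counts )$p.value} );
        printf "run %2d: p = %s\n", $run, $R->get( 'p' );
    }
    $R->stop();

    sub get_bucket_counts {
        # Placeholder only: random-ish counts around a flat baseline;
        # the real thing would come from the F-Y-with-MT test harness.
        return map { 1000 + int rand 40 } 1 .. 10;
    }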
I might look into trying to apply Fisher's Exact test to the problem tomorrow and see what that produces for F-Y with MT.
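For reference, one way that might be tried from Perl is to hand the counts to R's fisher.test through Statistics::R. All numbers below are made up, and whether a 2xk table of observed counts against a flat "expected" row is even the right framing is part of what would need working out:

    #!/usr/bin/perl
    # Sketch: Fisher's Exact test on a 2 x k table of observed bucket
    # counts vs. a flat reference, via R's fisher.test.
    use strict;
    use warnings;
    use Statistics::R;

    my @observed = ( 52, 48, 55, 45, 50, 50 );   # made-up counts
    my @flat     = ( 50, 50, 50, 50, 50, 50 );

    my $R = Statistics::R->new();
    $R->set( 'obs',  \@observed );
    $R->set( 'flat', \@flat );
    # For larger tables fisher.test needs simulate.p.value = TRUE
    # (or a bigger workspace) to stay tractable.
    $R->run( q{p <- fisher.test( rbind( obs, flat ), simulate.p.value = TRUE )$p.value} );
    print "Fisher's Exact p-value: ", $R->get( 'p' ), "\n";
    $R->stop();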
I also have an idea about generating artificial datasets and feeding them to S::CS to determine how big/small the differences are that cause it to switch from <1%, to 5%, to 50%, to >75%, etc.; and then try to reason what effect those changes would have upon the 3 scenarios you postulated in your first post.
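Roughly along these lines, perhaps; the bucket count, baseline and skew steps below are arbitrary placeholders, the point being only to watch where S::CS's verdict flips between its probability bands:

    #!/usr/bin/perl
    # Sketch: feed Statistics::ChiSquare a flat distribution with one
    # bucket skewed by increasing amounts, and record its verdict.
    use strict;
    use warnings;
    use Statistics::ChiSquare;

    my $buckets    = 10;
    my $per_bucket = 1000;                      # arbitrary baseline

    for my $skew ( 0, 5, 10, 20, 50, 100, 200 ) {
        my @counts = ( $per_bucket ) x $buckets;
        $counts[0] += $skew;                    # push one bucket off flat
        printf "skew=%3d : %s\n", $skew, chisquare( @counts );
    }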
Oh, and thank you for providing some interesting diversion for a boring Sunday :)
*I'm not at all convinced that these two sets of values are "more or less the same":
(pvalues=,0.4869334,0.4961692,0.8251134,0.2657584,0.84692,0.1296479,0.504212,0.9028209,0.8082847,0.2999607,0.154672,0.1660518,0.5143663,0.8120685,0.4452244,0.6561128,0.6123136,0.6994308,0.9302561,0.4757345)
pvalues=,0.25,0.25,0.75,0.25,0.75,0.1,0.5,0.9,0.75,0.25,0.1,0.1,0.5,0.75,0.25,0.5,0.5,0.5,0.9,0.25)
Replies are listed 'Best First'.
Re^9: Last comment (and nail in the coffin of) of S::CS :) by bliako (Abbot) on Jun 10, 2018 at 21:35 UTC
  by BrowserUk (Patriarch) on Jun 12, 2018 at 13:07 UTC
  by bliako (Abbot) on Jun 13, 2018 at 14:24 UTC