"I'm far from an expert on stats, but I don't believe that Chi2 is the right test for the kind of samples this produces; and I cannot see any reference to Yates correction in the module."
Me neither, far from an expert. Beyond chi-squared there are tests for finding a pattern in the shuffled data (I once tried zipping a file and using the compression ratio achieved as a proxy for how random the sequence is), and Monte-Carlo-splitting the shuffled array into groups to see whether each group's average approaches the whole array's average.
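For illustration, a quick sketch of that compression-ratio idea (not the exact thing I tried back then; Compress::Zlib and the sample data here are just for the example):

```perl
#!/usr/bin/perl
# Rough sketch of the compression-ratio heuristic: patterned data
# compresses well, random-looking data does not, so the ratio is a
# crude randomness proxy -- not a substitute for a proper test.
use strict;
use warnings;
use Compress::Zlib;    # CPAN; exports compress()

sub compression_ratio {
    my ($data) = @_;
    return length( compress( $data ) ) / length( $data );
}

my $patterned = 'ACGT' x 250;    # highly regular 1000-char sequence
my $shuffled  = join '', map { ( 'A', 'C', 'G', 'T' )[ rand 4 ] } 1 .. 1000;

printf "patterned ratio: %.3f\n", compression_ratio( $patterned );
printf "shuffled  ratio: %.3f\n", compression_ratio( $shuffled );
```

The closer the ratio is to 1, the less structure the compressor found in the sequence.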
I will report back chi-squared results using R
That despite the use of a completely bogus rand() function, a Fisher-Yates shuffle would still operate and produce results:
That all possible shuffles of the data were being produced. I chose to shuffle 4 values because the 24 possible results fit on a screen and are simple to verify manually.
That they were produced with (approximately) the same frequency. I.e. the number of times each possible shuffle was produced was approximately equal, and approximately 1/24th of the total runs.
In that respect, it served its purpose.
But if you are going to formally test a shuffle, using only 4-value arrays and 1e6 iterations probably isn't the ideal scenario.
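For anyone who wants to reproduce that kind of check, a minimal sketch (not the original script; the closure names are illustrative):

```perl
#!/usr/bin/perl
# Drive a Fisher-Yates shuffle of (1..4) with a pluggable rand()
# 1e6 times and count how often each of the 24 permutations turns up.
use strict;
use warnings;

sub fy_shuffle {
    my ( $rand, @a ) = @_;
    for my $i ( reverse 1 .. $#a ) {
        my $j = int $rand->( $i + 1 );    # index 0 .. $i inclusive
        @a[ $i, $j ] = @a[ $j, $i ];
    }
    return @a;
}

my $rand = sub { rand $_[0] };            # swap in a bogus rand() here
my $runs = 1_000_000;
my %counts;
++$counts{ join '', fy_shuffle( $rand, 1 .. 4 ) } for 1 .. $runs;

printf "%s: %6d  (expected ~%d)\n", $_, $counts{$_}, $runs / 24
    for sort keys %counts;
```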
ok
Re^6: Last comment (and nail in the coffin of) of S::CS :)
by BrowserUk (Patriarch) on Jun 10, 2018 at 13:54 UTC
Just for fun I made a copy of the chisquare() function in S::CS that returns the number it calculates, rather than its string assessment of what that value means:
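The modified code isn't reproduced here, but the numeric statistic it returns boils down to the standard goodness-of-fit sum; a minimal sketch, assuming equal expected frequencies across the bins (not the S::CS source):

```perl
use strict;
use warnings;

# Re-derivation of the chi-square statistic for equal expectations;
# the caller interprets the value against df = @observed - 1.
sub chisquare_value {
    my @observed = @_;
    my $total    = 0;
    $total += $_ for @observed;
    my $expected = $total / @observed;
    my $x2       = 0;
    $x2 += ( $_ - $expected )**2 / $expected for @observed;
    return $x2;
}

print chisquare_value( 10, 20, 30, 40 ), "\n";    # prints 20 for these counts
```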
(I also had to disable the "Malformed data" tests, as they reject floating-point numbers!) Then I wrote a script that repeated the test a hundred times on the F-Y shuffle using MT rand(), and gathered the numerical values it produced into an array:
And finally, I ran the original chisquare() function on its own output for assessment:
Finally, I think it got something right! Its own output is truly random :) (Sorry. I was bored :) )
by bliako (Abbot) on Jun 10, 2018 at 17:59 UTC
Fine. See also "calculate Chi-Square test". Back to business: I tried delegating the chi-squared test to R using:
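The actual call is behind the elided listing; one plausible shape for it (the helper name is mine, and it assumes Rscript is on the PATH):

```perl
use strict;
use warnings;

# Hypothetical helper, not the code from the post: hand the observed
# counts to R's chisq.test() via Rscript and read back the statistic
# and p-value it computes.
sub chisq_via_R {
    my @observed = @_;
    my $vec = join ',', @observed;
    my $out = qx{Rscript -e 'r <- chisq.test(c($vec)); cat(r\$statistic, r\$p.value)'};
    return split ' ', $out;    # ( statistic, p-value )
}

my ( $x2, $p ) = chisq_via_R( 41_527, 41_812, 41_702, 41_625 );  # made-up counts
printf "X-squared = %s, p-value = %s\n", $x2, $p;
```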
I have also modified your chisquaredVal() to return the statistic, as it already does, along with an indicative p-value.
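A sketch of what such a modification might look like (this is a guess at its shape, not the actual diff; Statistics::Distributions and the counts are assumptions):

```perl
use strict;
use warnings;
use Statistics::Distributions;    # CPAN; provides chisqrprob()

# made-up counts, for illustration only
my @observed = ( 41_527, 41_812, 41_702, 41_625 );
my $total    = 0;
$total += $_ for @observed;
my $expected = $total / @observed;
my $x2       = 0;
$x2 += ( $_ - $expected )**2 / $expected for @observed;

# upper-tail p-value; df = number of bins - 1
my $p = Statistics::Distributions::chisqrprob( $#observed, $x2 );
printf "statistic = %.4f, p-value = %.4f\n", $x2, $p;
```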
Then, I modified your script for doing a chi-square test using R as well. These are some results:
once more:
once more:
My verdict: Statistics::ChiSquare can be improved by also printing out a p-value (we accept bias only if the chi-square statistic is large and the p-value is below our chosen significance level, say 0.05). However, its calculated statistic is more or less the same as R's. And after my modification to return a p-value, its p-value is comparable with R's. See:
Exact same statistic (x2), comparable p-values.

So, that tells me that the Fisher-Yates shuffle, for this particular problem, is sensitive to the RNG's sequence: different runs (same array, different seed) produce sometimes sane and sometimes bogus (biased) results. Anything but bad-rand (e.g. sub { rand() < 0.5 ? 0 : rand( $_[0] ) }; see the snippet at the end of this post) should give a 0/5/10% chance of biased output; bad-rand gives more or less a 50% chance. However, there is not much difference between perl's rand() and the Mersenne Twister (for this particular use-case).

That concludes my part in this long tangent, and also settles that TODO of mine which so much bugged you :) and for good reason, good results and good fun, I say. Final word of warning: this investigation covers only this particular problem BrowserUK came up with; other use cases may differ.

The final script to do this weighs in at about 9 kB and is elided here (NOTE: you need to change the first 2 'my' declarations to 'our' in Statistics/ChiSquare.pm).
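For concreteness, here is that bad-rand as a drop-in for the $rand closure in the shuffle harness sketched further up the thread (illustrative only):

```perl
use strict;
use warnings;

# The "bad-rand" discussed above: half the time it returns 0, so a
# Fisher-Yates shuffle driven by it picks index 0 far too often and
# the permutation counts come out visibly biased.
my $bad_rand = sub { rand() < 0.5 ? 0 : rand( $_[0] ) };
```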
by BrowserUk (Patriarch) on Jun 10, 2018 at 20:53 UTC
"However, its calculated statistic is more or less the same as with R's. And after my modification to return a p-value, its p-value is comparable with R's."

Hm. If that is the case(*), then given the complete instability of S::CS's verdict on F-Y with MT over successive runs on the same dataset and the same number of iterations, I'd have to conclude that Chi2 isn't suitable for testing this algorithm and/or data. It would be interesting to see how consistent R's results are when running the same data/iterations. If that also produced unstable results, that would be proof that it's the wrong test.

I might look into trying to apply Fisher's Exact test to the problem tomorrow and see what that produces for F-Y with MT. I also have an idea about generating artificial datasets and feeding them to S::CS (a sketch of what I have in mind appears at the end of this post), to determine how big/small the differences are that cause it to switch from <1%, to 5%, to 50%, to >75% etc.; and then try to reason what effect those changes would have upon the 3 scenarios you postulated in your first post.

Oh, and thank you for providing some interesting diversion for a boring Sunday :)

(*) I'm not at all convinced that these two sets of values are "more or less the same":
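As for the artificial-datasets idea above, a sketch of what that probing could look like (my reading of the idea, not a script from the thread; the deltas and run count are arbitrary):

```perl
#!/usr/bin/perl
# Start from a perfectly uniform 24-way count table, shift a growing
# delta between two cells (total held constant), and watch where the
# statistic crosses the usual df=23 critical values:
# ~32.0 at 10%, ~35.2 at 5%, ~41.6 at 1%.
use strict;
use warnings;

my $runs  = 1_200_000;                  # divisible by 24
my $cells = 24;                         # permutations of 4 values
my $exp   = $runs / $cells;             # 50_000 per cell

for my $delta ( 0, 250, 500, 1_000, 1_500 ) {
    my @counts = ($exp) x $cells;
    $counts[0] += $delta;               # inflate one permutation ...
    $counts[1] -= $delta;               # ... deflate another
    my $x2 = 0;
    $x2 += ( $_ - $exp )**2 / $exp for @counts;
    printf "delta %5d -> X2 = %8.2f\n", $delta, $x2;
}
```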
Re^6: Shuffling CODONS
by BrowserUk (Patriarch) on Jun 10, 2018 at 12:46 UTC
"I will report back chi-squared results using R"

It will be interesting to see the results from a known good source, because I think that S::CS is (fatally) flawed. To get some feel for the accuracy of the test it performs, I decided to run it on the shuffle, using the known good MT PRNG and a small dataset (1..4), a good number of times, to see how consistent S::CS's results were; and the answer is not just "not very", but actually just "not":
79 more utterly inconsistent results: …
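For reference, a sketch of that consistency loop (not the original script; Math::Random::MT and the fixed seed are assumptions):

```perl
#!/usr/bin/perl
# Build a fresh 1e6-shuffle count table with the Mersenne Twister on
# each trial and print S::CS's verbal verdict for every one of them,
# to see how much the verdict varies on statistically identical data.
use strict;
use warnings;
use Math::Random::MT ();         # CPAN Mersenne Twister
use Statistics::ChiSquare;       # exports chisquare()

my $mt = Math::Random::MT->new( 42 );    # arbitrary seed

for my $trial ( 1 .. 10 ) {
    my %counts;
    for ( 1 .. 1_000_000 ) {
        my @a = 1 .. 4;
        for my $i ( reverse 1 .. $#a ) {         # Fisher-Yates
            my $j = int $mt->rand( $i + 1 );
            @a[ $i, $j ] = @a[ $j, $i ];
        }
        ++$counts{ join '', @a };
    }
    print "$trial: ", chisquare( values %counts ), "\n";
}
```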
Given this is a known good algorithm using a known good PRNG, all in all, and as I said earlier, I think that is as good a definition of random as I've seen a module produce as its results.