in reply to Re^3: Benchmark.pm: Does subroutine testing order bias results?
in thread Benchmark.pm: Does subroutine testing order bias results?
For the purpose of solving the problem I faced when I initiated this thread -- determining if an upgrade to one of my modules improves its performance in toto -- I think I'll KISS and use something like the script I posted in response to simonm above.
But could you post some code that illustrates your approach of running cmpthese() or timethese() twice in a run, the first time for getting memory allocated and the second time for results? Thank you very much.
Re^5: Benchmark.pm: Does subroutine testing order bias results?
by BrowserUk (Patriarch) on Jul 18, 2004 at 17:40 UTC
Sure. As you can see, it's not the lexically first test that gets biased; it's the first iteration of that test, which explains why the bias is more pronounced the fewer runs you do. By running all the tests once and discarding the results, you even up the playing field, and the second cmpthese() shows a much better distribution.

You should also consider shutting down as much else as possible on your box for the duration of the tests. For example, if my dial-up connection times out during a test, a high-priority thread runs for the duration of the reconnect, which can completely skew the results. Even using the mouse to pop up the Task Manager will have some effect. But if this is enough to obscure the gains you have made, it probably means they are so small as to be subject to random variation anyway.
by jkeenan1 (Deacon) on Jul 19, 2004 at 21:30 UTC
The results were exactly as you predicted. Here is a set of tests of cmpthese() which parallels the results I posted earlier from runs on Win2K and Darwin. I will now try to adapt this approach to my original problem. Thanks for taking the time to look at this.
System Info: same as in previous posting