in reply to Re: Regex failure interpretation
in thread Regex failure interpretation

In my experience, differences of 2% when benchmarking are more likely to be noise than anything else.

I notice that the speeds match the order in which the tests were run (first slowest, last fastest). Rename the cases (so that they sort, and hence run, in a different order) and you may well find a different 'winner'.

For such tiny differences, just running it again could easily give you a different winner.

Certainly, running on different platforms is unlikely to always produce the same winner even when the difference is more like 5% or even 15%.

In the future, you might want to have the benchmark run each test twice so you can get a feel for how much noise of that kind you have. For example, for each test case, foo, give Benchmark both a_foo and b_foo pointing at the same code; any difference reported between those two is pure noise.
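A minimal sketch of that idea using the core Benchmark module (the sample string, regex, and case names here are hypothetical stand-ins for your real cases):

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    # Hypothetical stand-in for one of your real regex cases.
    my $input = 'a sample input string';
    my $case  = sub { $input =~ /sample\s+input/ };

    cmpthese( -3, {
        a_foo => $case,   # same code registered under two names...
        b_foo => $case,   # ...so any difference reported between them is noise
        # real_other_case => sub { ... },
    } );

If a_foo and b_foo come out a couple of percent apart, then you know that a couple of percent between your real cases means nothing.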

Also, your test strings are even shorter than the expected inputs. That alone is often enough to make a benchmark show nothing meaningful. And you have a loop in your test code; processing the loop could be taking more time than running the regexes against your tiny strings.
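One rough way to check that (the loop count, string, and regex below are made up for illustration): benchmark the same loop with an empty body as a control and see how much of the total it accounts for.

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my $input = 'a short test string';

    cmpthese( -3, {
        # Hypothetical version of a test that loops over the match
        loop_and_match => sub {
            for ( 1 .. 100 ) { $input =~ /short\s+test/ }
        },
        # Same loop with an empty body: measures loop overhead alone
        loop_only => sub {
            for ( 1 .. 100 ) { }
        },
    } );

If loop_only takes a large fraction of loop_and_match's time, then the loop, not the regex, is what you are really measuring.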

- tye
