Note that this is not a case of a 300% speed increase on a micro-operation producing an overall speed-up of 31100% (311 times as fast, i.e. 31100% of the original speed). The demonstration is impressive but, to be clear, also rather tangential to what I was actually talking about.
This is a decisive refutation, but of a straw-man argument that I wasn't actually making. It resoundingly defends the idea that optimization can sometimes be quite significant. But I was not attacking that idea (and never have). (Though I certainly allow that BrowserUk may have sincerely believed I was doing so.)
It might be interesting to reuse the code to test my actual prediction.
I was a bit surprised that I didn't see anybody benchmark the original code (but perhaps I just missed it). Significant performance degradation on this type of stuff is usually due to regexes making entire copies of very long strings (especially when applying a large number of regexes per string) or due to pathological backtracking. The latter seems unlikely after a quick scan of the original code. So I'll take a wild guess (without having even glanced at the code hidden above) and predict that the disconnect has much more to do with the size of the strings being used and rather little to do with the original implementation actually being anywhere close to 31100% slower on strings similar to those actually specified in the root node.
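A quick way to see the string-size effect I mean (a minimal sketch, not the code hidden above; the lengths and the pattern are invented for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw(timethis);

    # Each global substitution pass has to scan (and, where it matches,
    # rewrite) the whole string, so its cost grows with string length
    # and multiplies with the number of regexes applied per string.
    for my $len ( 1_000, 1_000_000 ) {
        my $str = 'a' x $len;
        print "string length $len:\n";
        timethis( -1, sub {
            my $s = $str;      # copy, so each run starts fresh
            $s =~ s/a/b/g;     # one full pass over the string
        } );
    }

Per-call overhead stays roughly constant while the per-pass cost scales with the string, which is exactly why a speed-up measured on tiny strings need not carry over to realistic ones.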
It should be fairly simple to show how the gulf between my "<20%" prediction and the above "31100%" demonstration is distributed over the various disconnects; a sketch of such an experiment follows.
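Something along these lines would do it (a sketch only; the two implementations, the string sizes, and the patterns below are stand-ins I made up, since the point is merely to compare the same two versions at both the demonstration's tiny strings and strings of the size specified in the root node):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    # Stand-ins for the two versions being compared; substitute the
    # original and the optimized code from the thread here.
    my %impl = (
        original  => sub { my $s = shift; $s =~ s/x/y/g for 1 .. 5; $s },
        optimized => sub { my $s = shift; $s =~ tr/x/y/;            $s },
    );

    # Benchmark both at the tiny size used in the 31100% demonstration
    # and at an assumed root-node-like size (adjust 200_000 to match).
    for my $len ( 10, 200_000 ) {
        my $str = 'x' x $len;
        print "--- string length $len ---\n";
        cmpthese( -2, {
            original  => sub { $impl{original}->($str)  },
            optimized => sub { $impl{optimized}->($str) },
        } );
    }

If my guess is right, the ratio reported by cmpthese should collapse from hundreds of times faster on the tiny strings toward something under 20% on the realistic ones.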
Perhaps something informative, even useful, could result. It might even provide practical lessons for some on a few of the differences between micro-optimizations and useful optimizations.
- tye
In reply to Re^8: Speed Improvement (Is 311 times faster: "insignificant"?) by tye
in thread Speed Improvement by Nar