"Provided my code has no bugs, see for yourself:"
I've noticed persistent differences between your code's results and mine. (Well, the versions of your code included in ikegami's benchmarks.) They take the form of two or three subsequences whose counts differ by 1 or 2. I suspect this is an easy bug to fix... but since it's your code, I figure you can debug and fix it. ;-)
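(For what it's worth, a quick way to pinpoint exactly which subsequences disagree is to diff the two count hashes. This is only a minimal sketch; %mine and %theirs are placeholders for whatever your code and mine actually return.)

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Placeholder hashes; substitute the subsequence => count output
    # of the two implementations being compared.
    my %mine   = ( AAC => 12, ACG => 9,  CGT => 5 );
    my %theirs = ( AAC => 12, ACG => 10, CGT => 5 );

    # Collect every subsequence seen by either run.
    my %all;
    @all{ keys %mine, keys %theirs } = ();

    # Report only the subsequences whose counts differ.
    for my $seq ( sort keys %all ) {
        my $count_mine   = $mine{$seq}   || 0;
        my $count_theirs = $theirs{$seq} || 0;
        printf "%s: mine=%d yours=%d (off by %d)\n",
            $seq, $count_mine, $count_theirs,
            abs( $count_mine - $count_theirs )
            if $count_mine != $count_theirs;
    }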
That all said, I'd like to point out two things. 1) Including the file reads in your benchmark obscures the issue: as long as the machine has plenty of memory (and maybe yours doesn't), you are contaminating your results with two different methods of reading the data. 2) You could modify my algorithm, and most of the others', to work with chunks as well if RAM really were an issue.
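To illustrate point 1, here's roughly what I have in mind: slurp the file once, outside the timed code, so only the counting work gets compared. This is just a sketch; the two subs are simple stand-ins that count a fixed placeholder subsequence, not your code or mine.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    # Read the data once, before any timing, so file I/O can't
    # contaminate the comparison of the counting methods themselves.
    my $file = shift @ARGV or die "usage: $0 file\n";
    my $data = do {
        local $/;                                  # slurp mode
        open my $fh, '<', $file or die "Can't read $file: $!";
        <$fh>;
    };

    my $target = 'ACGT';                           # placeholder subsequence

    # Stand-ins for the competing counting methods.
    sub count_regexp { my $n = () = $_[0] =~ /\Q$target\E/g; return $n }
    sub count_index  {
        my ( $n, $pos ) = ( 0, 0 );
        $n++ while ( $pos = index( $_[0], $target, $pos ) + 1 ) > 0;
        return $n;
    }

    # Only the counting is timed; both subs see the same in-memory string.
    cmpthese( -3, {
        regexp => sub { count_regexp($data) },
        index  => sub { count_index($data)  },
    });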
If you do work your bugs out, though, I am interested in your method of iterating over the shorter strings for the smaller-length matches. I haven't looked at it closely yet, but I'd like to see how it scales.
-sauoq "My two cents aren't worth a dime.";