I am having problems when the window happens to split the string I am searching for in two, so it can never be found - how can this be fixed?
The algorithm uses a sliding window, and matches strings that fall within that (sliding) window. If you're trying to match a string that doesn't fit in the window, make the window larger. Or if you think you've found a problem, post a test case that demonstrates the failure.
Yep, it is very fast, but why is that better than this:
open(F, "<", $file) or die "$file: $!";
binmode(F);
undef $/; # switch off end-of-line separating
# read file in large chunks
while (<F>) {
    while ( m/$re/oigsm ) {
        print "$1\n";
    }
}
$/ = "\n"; # switch back to line mode (note: '\n' in single quotes would assign a literal backslash-n)
close(F);
?
Thanks,
Tamas
but why is that better than this: ...
My fragment doesn't assume that the huge file will fit in memory, and it matches across read boundaries. Your approach sets up for a single-read slurp.
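The difference is easy to demonstrate: with `undef $/` the very first readline pulls the entire file into memory, whereas assigning a reference to a number makes readline return fixed-size records, keeping memory bounded. A minimal sketch (the 16-byte record size and the in-memory "file" are made up for the demo):

```perl
use strict;
use warnings;

# A 100-byte in-memory "file" stands in for a huge file on disk.
open my $fh, '<', \("a" x 100) or die $!;
{
    local $/ = \16;                 # readline now returns 16-byte records
    my @sizes;
    while (my $rec = <$fh>) {
        push @sizes, length $rec;   # each read is at most 16 bytes
    }
    print "@sizes\n";               # prints "16 16 16 16 16 16 4"
}
```

With `undef $/` instead, `@sizes` would hold the single value 100: one slurp, the whole file in memory at once. Note this sketch still does not match across record boundaries on its own; that needs the carry-over trick discussed elsewhere in this thread.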
Making the window large only reduces the possibility that the string you are looking for is cut in two; there is no real way to prevent it unless you are looking for a fixed-size string. In that case you can keep the tail of the old window and prepend it to the new one: if a match was split across the boundary, this rejoins it. Just make sure to adjust any reported positions back accordingly.
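For a fixed-length pattern, the carry-over described above might be sketched like this. (The sub name, chunk size, and demo string are illustrative, not from the thread; since no complete match fits in length($pat)-1 bytes, the carried tail can never be double-counted.)

```perl
use strict;
use warnings;

# Count occurrences of a fixed-length pattern, reading in chunks and
# carrying the last length($pat)-1 bytes into the next window so a
# match split across a read boundary is still found.
sub count_matches {
    my ($fh, $pat, $chunk) = @_;
    my $keep  = length($pat) - 1;   # max bytes a split match can straddle
    my $tail  = '';                 # carried over from the previous chunk
    my $count = 0;
    while (read($fh, my $buf, $chunk)) {
        my $window = $tail . $buf;
        $count++ while $window =~ /\Q$pat\E/g;
        $tail = length($window) > $keep ? substr($window, -$keep) : $window;
    }
    return $count;
}

# Demo: "needle" is deliberately split across the 8-byte read boundary.
open my $fh, '<', \"xxxxxxneedlexxxx" or die $!;
print count_matches($fh, 'needle', 8), "\n";   # prints 1
```

A plain chunk-by-chunk search with no carried tail would miss this occurrence entirely, which is exactly the failure mode the original poster reported.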