Thanks for posting your filters. I guess what I mean is that, apparently, backtracking and related features aren't required in real-life use. In fact, I find them a source of distraction and bugs in my regexes. For instance, if you want to parse C source code, you might write something like:
/((?:const\s+)*)\s+(\w+)\s+(\w+);/
Now suppose the C code has a bug, and it defines a variable like this:
const int;
Then your three regex captures look like:
$1==''
$2=='const'
$3=='int'
which clearly wasn't the intended result.
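The mis-capture above is easy to reproduce. Here is a sketch in Python's re module, whose semantics match Perl's for these constructs; note that the original pattern only matches when whitespace precedes the declaration, so the sample text below starts with a newline (an assumption for illustration):

```python
import re

# The original pattern: qualifiers, then whitespace, type, name, semicolon.
pattern = re.compile(r'((?:const\s+)*)\s+(\w+)\s+(\w+);')

# Buggy C code: a declaration with no variable name.
code = "\nconst int;"

# The qualifier group gives back "const" via backtracking, so "const"
# lands in the type slot and "int" in the name slot.
m = pattern.search(code)
print(m.groups())  # ('', 'const', 'int')
```

The qualifier group matches "const " greedily, but when the rest of the pattern cannot be satisfied, backtracking hands "const" back so it can be re-matched as the type, which is exactly the silent mis-parse described above.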
This, for instance, is a consequence of backtracking.
If you look at regex history, people had NFA/DFA machines from computer science theory, and a "language" was defined as whatever those machines accept. That's what regexes matched. Nowadays, Perl has actually introduced non-backtracking constructs, namely the atomic group (?>...), essentially acknowledging the "problem". However, I'm not quite sure why non-backtracking matching shouldn't be all you need in a "real-life" situation.
Thanks a lot for everyone's comments.
Reza.
Neither backtracking nor captures is really a problem that needs fixing. Features like (?>...) were added to extend the sorts of things regular expressions can do.
To take your example, regular expressions are not the reason the wrong thing was matched. Your expression allowed that interpretation (or it would have, with some minor changes). The problem is that you are using an expression that does not properly cover the case you claim to be looking for.
To take another example that I think shows why these features are more useful, let's match a US telephone number. A telephone number in the US can take many forms:
- 445-7890
- 445 7890
- 4457890
- 713 445-7890
- (713) 445-7890
- 713-445-7890
- 7134457890
- 713 445 7890
And that leaves out adding a 1 or 0 for long distance and extensions, which people often give as part of the number.
Matching this set of formats requires optional characters, which (if you are doing captures) requires backtracking. (Not strictly, but the implementation gets hairier if we discuss that part.)
So to match a phone number, we would need:
m{ (
     (?: \( \d\d\d \) \s* | \d\d\d (?: -? | \s* ) )?
     \d\d\d (?: - | \s* ) \d\d\d\d
   )
 }x;
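As a sanity check, here is a sketch transliterating the phone-number pattern above (parentheses balanced) into Python's re, which is semantically equivalent for these constructs, run against every format in the list:

```python
import re

# Python transliteration of the Perl pattern; re.VERBOSE plays the
# role of Perl's /x modifier, ignoring whitespace in the pattern.
phone = re.compile(r'''
    ( (?: \( \d\d\d \) \s* | \d\d\d (?: -? | \s* ) )?
      \d\d\d (?: - | \s* ) \d\d\d\d
    )''', re.VERBOSE)

samples = [
    "445-7890", "445 7890", "4457890",
    "713 445-7890", "(713) 445-7890", "713-445-7890",
    "7134457890", "713 445 7890",
]

for s in samples:
    assert phone.fullmatch(s), s   # every listed format is matched in full
print("all formats matched")
```

Note that several of these only match because of backtracking: for "713 445-7890", the engine first tries the empty -? branch after the area code, fails to find a digit at the space, and backs up to try the \s* branch, which is exactly the behavior under discussion.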
Obviously, this appears somewhat complicated and leaves quite a bit of room for confusion. In this case, however, the problem is not the regex; it's the fact that the phone number format is specified fairly sloppily.
In fact, I have most often found the features you are questioning useful when dealing with real-world data. Because unlike the stuff (insert pompous tone) I generate, the real world is messy and inconsistent.
One of the nastiest problems I ever tried to solve was extracting tables of information from text files generated by people at various companies. You would not believe how many weird variations people come up with that a human can interpret but that are almost unparseable by computer. Without many of these features, we would not have gotten as far as we did.
You see, in the old days, a regex only returned a boolean: match or no match. Later, people got serious about actually capturing the matched sections, and they introduced backtracking engines (as opposed to the classical NFA/DFA machines) to get that sort of thing done.
But then the actual _algorithm_ and internals of what's happening become crucial. You now care about more than a theoretical boolean result of match or no match.
When it matters what the engine is doing internally, backtracking makes the behavior very hard and unnatural to predict and reason about. That's not how "real" humans think about their mother tongue: they don't seem to seriously "backtrack" in their brains while reading a book.
So, are there real-life examples of where that's needed?
Reza.