Wiggins has asked for the wisdom of the Perl Monks concerning the following question:

Today I am asking for guidance on a quest. The guidance has to do with efficiency, flexibility, and/or speed: searching for matches of multiple regexps in a large document. This is not homework, and I am looking for prior empirical experience. I have to assume this path has been trodden before, or it wouldn't be a path.

Approach 1 -- run each regexp over the full document, one at a time.
Approach 2 -- dynamically build one huge alternation (re1|re2|re3|re4|...) and run it over the document once.
Approach 3 -- use a while ( $doc =~ m/\G.../gc ) { ... } loop structure to step through the document incrementally in a single pass.
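To make the three approaches concrete, here is a minimal sketch in Perl; the patterns and the document are hypothetical stand-ins, not from the actual problem.

```perl
use strict;
use warnings;

# Hypothetical patterns and document, purely for illustration.
my @patterns = ( qr/foo\d+/, qr/bar/, qr/baz{2}/ );
my $doc      = 'foo1 bar foo22 bazz';

# Approach 1: one regexp at a time over the full document.
my $count1 = 0;
for my $re (@patterns) {
    $count1++ while $doc =~ /$re/g;
}

# Approach 2: one big alternation, single pass.
my $big    = join '|', map { "(?:$_)" } @patterns;
my $count2 = () = $doc =~ /$big/g;

# Approach 3: incremental scan with \G and /gc, single pass.
my $count3 = 0;
pos($doc) = 0;
while ( pos($doc) < length $doc ) {
    if ( $doc =~ /\G(?:$big)/gc ) { $count3++ }
    else                          { pos($doc)++ }   # skip one char, keep scanning
}

print "$count1 $count2 $count3\n";
```

Here all three agree, but once patterns overlap, Approach 1 can find matches that the single-pass approaches consume past.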

As I write this, it occurs to me that RE3 might be a subset of RE18, and I might want to know about both.

It is always better to have seen your target for yourself, rather than depend upon someone else's description.

Re: Regular Expression (Regex) Sieve
by ikegami (Patriarch) on Jul 14, 2009 at 17:33 UTC

    Based on your last paragraph, Approaches 2 and 3 are nowhere near as good as Approach 1.

    You want overlapping matches of different patterns. Do you want overlapping matches of the same pattern?

    Do you need to know which pattern matched which result?

      I fully agree that the last constraint pretty much forces Approach #1. But it was just an afterthought.

      Overlaps between different patterns could be more significant than overlaps within a single pattern. When the patterns are supplied by different people, the "overlap" wouldn't be seen by either creator, while any single pattern should be structured by its creator to work as intended when processed with the /g modifier.

      Each pattern will have an assigned weight. Each match will add that pattern's weight to the document's score. The final document score is what I am really after. So the answer is 'yes'.
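That weighted-scoring scheme can be sketched in a few lines; the patterns and weights here are hypothetical placeholders:

```perl
use strict;
use warnings;

# Hypothetical pattern/weight pairs; every match adds the pattern's weight.
my @rules = (
    [ qr/urgent/i,  5 ],
    [ qr/invoice/i, 2 ],
);

my $doc   = 'URGENT: second urgent invoice attached';
my $score = 0;
for my $rule (@rules) {
    my ( $re, $w ) = @$rule;
    $score += $w while $doc =~ /$re/g;   # each match contributes its weight
}
print "score: $score\n";
```

Since Approach 1 runs each pattern separately, the /g match positions are independent per pattern, which is what lets every pattern score every one of its matches.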

      Thanks

      It is always better to have seen your target for yourself, rather than depend upon someone else's description.

Re: Regular Expression (Regex) Sieve
by dsheroh (Monsignor) on Jul 14, 2009 at 18:33 UTC
    If I'm understanding your question correctly (and I may well not be), you may want to take a look at Regexp::Assemble. It is roughly similar to your #2, but it will produce a more efficient regex (e.g., (re[1234]) instead of (re1|re2|re3|re4)) and has an option for identifying which of the original source regexes was matched.

    I've used Regexp::Assemble to good effect in relatively simple cases to locate all words present in a body of text from a list of a couple hundred target words, but have not had cause to use the 'report source regex' feature, so I can't comment on how well that works.
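For reference, basic Regexp::Assemble use (it is a CPAN module, not core) looks roughly like this; the word list and text are hypothetical. The source-identification feature mentioned above is, as I recall, enabled via the constructor's track option, but only the plain assembly is shown here:

```perl
use strict;
use warnings;
use Regexp::Assemble;   # CPAN module, not in core

my $ra = Regexp::Assemble->new;
$ra->add($_) for qw(cat cart cot);   # hypothetical target words
my $re = $ra->re;                    # one assembled, compiled regex

my $text = 'the cart rolled past the cot';
my @hits = $text =~ /\b($re)\b/g;    # find all target words in one pass
print "@hits\n";
```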

      Yes!! This is the sort of guidance I was looking for... "That path goes by a nice lake; the one over there leads into the abyss".

      The balance of simplicity and functionality must also be considered.

      It is always better to have seen your target for yourself, rather than depend upon someone else's description.

        My suggestion is to build an FSM from all the REs and then identify shift/reduce conflicts, which indicate multiple matches. As you parse through the document, you memorize the reduces and force a shift to potentially identify a longer match. This way you only have to traverse the document once. If you have multiple matches of the same length you may have reduce/reduce conflicts, but they should be easy to identify, since you can check for other reductions every time you reach one.

        Another alternative is to find nested REs. Run a regex for each RE over the set of remaining REs, which should be O(n log n) with respect to the set of REs. Once you find a match, take out either the matched or the matching RE and go on. If you find more than one matching RE, repeat the process, saving all the sets. Once you find no more matches, run each of the sets over the document. In the worst case this may be more expensive than running each regex separately; in the best case you will run through the document only once, but you'll pay a little to do regex matching over the REs themselves.
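For the special case of literal patterns, the "regex over the set of REs" nesting check can be sketched like this (toy data; deciding containment for arbitrary regexes is the much harder language-inclusion problem that the FSM idea addresses):

```perl
use strict;
use warnings;

# Toy nesting check for *literal* patterns only: pattern A "nests" in
# pattern B if A matches B's own text.
my @res = ( 'cat', 'concatenate', 'dog' );

my @nested;
for my $inner (@res) {
    for my $outer (@res) {
        next if $inner eq $outer;
        push @nested, [ $inner, $outer ] if $outer =~ /\Q$inner\E/;
    }
}
printf "%s nests in %s\n", @$_ for @nested;
```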
      Unless I'm missing something, R::A won't satisfy the requirement in the last para of the OP. That's why I didn't mention it.
Re: Regular Expression (Regex) Sieve
by SuicideJunkie (Vicar) on Jul 14, 2009 at 17:35 UTC

    If you want to know the results of each regex match (3 vs 18 as mentioned), then you must run them each in turn and record the results. Thusly, approach 2 is right out.

    If the regexes you're matching against involve multiline matches, then you're down to just looping over the list of regexes and matching each against the whole file in turn.


    The best thing to do is benchmark the various options (while also ensuring that they return the correct results). In practice, I expect that the quality of the regexes input into your program will make all the difference in performance.
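A minimal harness with the core Benchmark module might look like this (synthetic document, placeholder patterns); checking that the approaches agree before timing them keeps the benchmark honest:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Synthetic document and placeholder patterns.
my @patterns = ( qr/foo\d+/, qr/bar/, qr/baz/ );
my $doc      = join ' ', ('foo1 bar baz filler') x 1000;
my $big      = join '|', map { "(?:$_)" } @patterns;

# Sanity check: both approaches must return the same count before timing.
my $n1 = 0;
for my $re (@patterns) { $n1++ while $doc =~ /$re/g }
my $n2 = () = $doc =~ /$big/g;
die "results differ: $n1 vs $n2" unless $n1 == $n2;

cmpthese( -1, {
    one_at_a_time => sub {
        my $n = 0;
        for my $re (@patterns) { $n++ while $doc =~ /$re/g }
    },
    alternation => sub {
        my $n = () = $doc =~ /$big/g;
    },
} );
```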

    For example, a set of straightforward substitutions I ran against a huge file a few weeks ago spent about 5 seconds reading the input and 5 seconds writing the results back to disk, out of a total runtime of 10 seconds; the substitutions themselves cost essentially nothing.

    On the other hand, if your input regexes are complicated messes with lots of backtracking and exponential time cost, you're hosed no matter what you do.

Re: Regular Expression (Regex) Sieve
by biohisham (Priest) on Jul 14, 2009 at 17:52 UTC
    What do you think about using memory-free (non-capturing) parentheses and lookahead/lookbehind assertions? They can contribute to efficiency since they don't store captures, and of course you can use them in conditional matching contexts. Just thought I'd suggest something in addition.
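A small illustration of both suggestions, on a hypothetical string:

```perl
use strict;
use warnings;

my $s = 'version 5.30.1';

# Non-capturing ("memory-free") groups avoid storing $2, $3, ...;
# only the group we actually want is captured here.
my ($major) = $s =~ /version (\d+)\.(?:\d+)\.(?:\d+)/;

# A lookahead asserts context without consuming it:
# a run of digits that is followed by a dot.
my $has_dotted = $s =~ /\d+(?=\.)/ ? 1 : 0;

print "$major $has_dotted\n";
```

The saved capture bookkeeping is a modest win; the larger performance differences usually come from the structure of the patterns themselves.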
    Excellence is an Endeavor of Persistence. Chance Favors a Prepared Mind