in reply to Re: Benchmarking regex alternation
in thread Benchmarking regex alternation

the regex $str =~ /foo/ devolves to something pretty much like index $str, 'foo'

Yes, exactly. It'll be a little slower than an index, but only because there is a longer code path from the regex opcode to the actual FBM search than from the index opcode.
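As a rough illustration (the haystack string and subroutine names here are mine, not the parent's benchmark script), the two code paths can be compared directly with Benchmark.pm's cmpthese:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Haystack with the needle buried near the front; both searches below
# reduce to a Fast Boyer-Moore (FBM) scan inside the interpreter.
my $str = ('x' x 1000) . 'foo' . ('x' x 1000);

cmpthese( 100_000, {
    with_index => sub { index($str, 'foo') >= 0 },
    with_regex => sub { $str =~ /foo/ },
});
```

The absolute numbers depend on your perl build; the point is only that the gap between the two entries stays small.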

The surprise is how fast a character-class match with a common prefix is. That result also goes a long way toward validating the benchmark.

And it illustrates a new optimisation in 5.10:

                       Rate using_alt_match using_or_nomatch using_alt_nomatch using_or_match
 using_alt_match   412710/s              --             -29%              -30%           -36%
 using_or_nomatch  580306/s             41%               --               -1%            -9%
 using_alt_nomatch 585631/s             42%               1%                --            -9%
 using_or_match    640118/s             55%              10%                9%             --

Basically, 5.10 is smart enough to convert /baz|bar/ into something close to /ba[rz]/. It's not quite as fast when written as an alternation due to current implementation details, but it is very close.
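A minimal sketch of the comparison being described (the test string and entry names are illustrative, not the original benchmark):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $str = ('x' x 100) . 'bar';

# On 5.10+ the trie optimisation compiles the literal alternation into
# something close to the hand-written character class, so the two
# entries should land within a few percent of each other.
cmpthese( 200_000, {
    alternation => sub { $str =~ /baz|bar/ },
    char_class  => sub { $str =~ /ba[rz]/ },
});
```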

---
$world=~s/war/peace/g

Replies are listed 'Best First'.
Re^3: Benchmarking regex alternation
by sgt (Deacon) on Jan 30, 2007 at 17:27 UTC

    I wonder about /others|bar|baz|others/: if there are a lot of alternations, does the probability of common prefixes decrease with the number of alternations, or does it try to find common prefixes for close neighbours?

    In the last 2-3 months on p5p, the non-linearity of the regexp engine was mentioned a few times. The problem with respect to the internal UTF-8 representation is that at a given byte position you cannot really say what the closest "character" position is without knowing a previous correct one (the start, for example, is assumed ok). Is that the only cause of non-linearity? Could you use markers to always keep a correct "last" character position? (Cases with lots of backtracking could benefit from this, no? Or is all that already taken care of in some smart way?)
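To illustrate the byte-versus-character mismatch being described (a toy example of mine, not from the p5p discussion):

```perl
use strict;
use warnings;

my $s = "caf\x{e9} noir";    # 9 characters; the e-acute is 2 bytes in UTF-8
utf8::upgrade($s);           # force the internal UTF-8 representation

print length($s), "\n";      # character semantics: 9

{
    use bytes;               # switch to raw byte semantics
    print length($s), "\n";  # 10 bytes: widths vary per character, so a
}                            # byte offset can't be mapped to a character
                             # offset without scanning from a known point
```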

    Actually, the "keeping all markers" trick reminds me for some reason of packrat parsing. Have you looked at packrat parsing in the context of regexes? (Wikipedia has links on the subject, and to the original Haskell implementation of the algorithm.) The algorithm seems to limit the worst-case time behaviour of pathological NFA (non-POSIX) regexes to something roughly linear in the regex size.

    thanks --stephan

    update: corrected typo if to is