in reply to Re^2: Alternations and anchors (trie optimization)
in thread Alternations and anchors

> I'm not sure why ((?:^(?:aaa|bbb|ccc))|(?:ddd|eee|ccc)) can be optimised but not (?:^aaa|ddd),

As I said:

Because "aaa" is a string of literal characters but "^aaa" isn't. "^" is a meta symbol for anchoring. The literal character is "\^", but that's not what you want.

If you need a better explanation of how a trie works, please follow the links I provided.

The long answer

It certainly could be implemented for surrounding anchors too, just by using the workaround I showed.
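In pattern form, that is exactly what the regex quoted above does: each inner alternation contains only literal strings, so each one stays trie-able, and the anchor sits outside the literal group:

    # both inner alternations are purely literal and can be compiled
    # into tries; the "^" applies to the first group from outside
    my $re = qr/(?:^(?:aaa|bbb|ccc))|(?:ddd|eee|ccc)/;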

But it's normally a trade-off between the performance gain and the added code complexity.

Perl's source already suffers from covering too many edge cases which are of interest only to a small minority of users. And that minority shouts the loudest when backwards compatibility is broken.

At the same time the code is getting increasingly complicated to maintain.

For instance: you could volunteer to implement a solution which creates multiple trie alternations surrounded by different anchors. (Not trivial to test.)

Then someone with a commit bit has to decide whether it's worth the resulting trouble to test and maintain that code for eternity.

In the end it's strategically easier to provide a CPAN module covering this edge case.

Usage would show if it's of wider interest or just pleasing you and a handful of others.

Cheers Rolf
(addicted to the Perl Programming Language :)
see Wikisyntax for the Monastery


Re^4: Alternations and anchors (trie optimization)
by cavac (Prior) on Apr 03, 2025 at 11:44 UTC

    Usage would show if it's of wider interest or just pleasing you and a handful of others.

    Without knowing more about the OP's specific use case, I certainly don't know whether this attempt at optimization is actually required or whether we're dealing with an X/Y problem here.

    For example, if the input dataset is referred to often but hardly ever changes, caching the parsed results might decrease wait time by orders of magnitude more in the long run than trying to optimize the parsing time. If new or changed data is also not used immediately, throw in a bit of pre-computing, and we're basically down to the time required to read a file from disk (and even then there are ways to optimize that, for example by pre-caching in RAM).
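    As a rough sketch of that caching idea (parse_file and the cache path are made-up placeholders, not anything from the OP), a serialized cache can be keyed on the input file's modification time:

        use Storable qw(retrieve nstore);

        sub load_parsed {
            my ($file) = @_;
            my $cache  = "$file.cache";    # assumed cache location
            my $mtime  = (stat $file)[9];

            # reuse the cache while it is at least as new as the input
            return retrieve($cache)
                if -e $cache && (stat $cache)[9] >= $mtime;

            my $data = parse_file($file);  # the expensive parse (hypothetical)
            nstore($data, $cache);         # persist for the next run
            return $data;
        }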

    PerlMonks XP is useless? Not anymore: XPD - Do more with your PerlMonks XP
    Also check out my sister's artwork and my weekly webcomics
      You are essentially saying that improving the regex engine is not necessary because you can preprocess any input in a database.

      It's like saying that improving Perl's speed is nonsense because you can always use C or assembler instead.

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      see Wikisyntax for the Monastery

        You are essentially saying that improving the regex engine is not necessary

        That's not what I read. More like changing the regex engine may not be necessary for the OP's real world problem (whatever that might be). If someone were to "improve" the regex engine to make this particular operation faster to run or easier to code, what would be the penalties for every other use of the engine? A workaround might well be the better option.

        Generally, if an improvement can be made which neither breaks backwards compatibility nor slows down any aspect of RE use nor introduces a forwards maintenance problem, then I say go for it. Any other scenario should remain up for debate and be decided on the relative merits/demerits.


        🦛

        No, that's not what I'm saying.

        What I'm saying, in the case of a question from the OP, is that given their (assumed) limited time to come up with a workable solution, it is always helpful to know the full story. That way we can see the bigger picture and maybe come up with a solution that gives an even bigger boost in performance.

        Optimizing the regex engine in general, and finding a better solution for the given code example, is always a good thing. What I meant in my post is that regex matching might not have to be in the performance-critical part of the overall system at all.

        To give you an example from my own software: I'm using IO::Compress::Gzip for HTTP compression ("Content-Encoding"). I have to use that for dynamic content on the fly, so getting the best performance there is important. But by pre-compressing static assets during startup, I can reduce workload and wait time during page loads a lot.
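        That startup step could look roughly like this (the paths and compression level are placeholders, not my actual code):

            use IO::Compress::Gzip qw(gzip $GzipError);

            # compress every static asset once at startup, so a request
            # only has to pick the ready-made "$asset.gz" file
            for my $asset (glob('static/*.css'), glob('static/*.js')) {
                gzip($asset => "$asset.gz", -Level => 9)
                    or die "gzip failed for $asset: $GzipError";
            }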

        Another example: on some tables in my database, I can do searches over all columns, some of which have to be converted from one type to another or have some calculations done on them. No matter how you optimize your SQL statements, it's going to be slow. By pre-calculating a single text-type search column on inserts, INSERT gets a bit slower (the calculation runs in a background process), but I can use a full-text search module on a single, indexed column (essentially trading space for time) to speed up searches. It's still important to make the pre-calculation fast and efficient¹, but moving the main processing out of the time-critical user interaction path reduces the time the user has to wait for results.²
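        In sketch form (made-up table and column names, $dbh a connected DBI handle, Postgres-style string concatenation), the write side fills one search column and the read side only touches that column:

            # write: compute the search text once, when the row is stored
            $dbh->do(q{
                INSERT INTO items (name, descr, search_text)
                VALUES (?, ?, lower(? || ' ' || ?))
            }, undef, $name, $descr, $name, $descr);

            # read: one indexed column instead of per-row type conversions
            my $rows = $dbh->selectall_arrayref(
                q{SELECT * FROM items WHERE search_text LIKE ?},
                undef, '%' . lc($term) . '%',
            );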


        ¹ Processor cycles cost power, which costs money.

        ² People's time costs more money than a hard disk upgrade every couple of years, which has to be done anyway.