in reply to Re: RegExp breaks in Perl 5.10
in thread RegExp breaks in Perl 5.10

Out of more than 28K words, only 15 fail... I wouldn't think this is related to encoding.

Yes, this is definitely odd, in particular as the ó character, which was causing problems in your case, was not one of the problem chars in Slaven Rezic's test code (which I'm re-posting here for easy reference):

for my $chr (160 .. 255) {
    my $chr_byte = chr($chr);
    my $chr_utf8 = chr($chr);
    utf8::upgrade($chr_utf8);
    my $rx = qr{$chr_byte|X}i;
    print $chr . " " . ($chr_utf8 =~ $rx ? "ok" : "not ok") . "\n";
}

Here, it was mainly uppercase letters where the match failed.

Note that the matching here is done case-insensitively (which you don't do in your module); and indeed, when you remove the 'i' from the qr{}, everything seems to work fine... So I played around with this a bit more, and it turns out the bug is highly context-dependent (which could explain why most of your regexes kept working).

For example, this modified test code still works fine:

for my $chr (160 .. 255) {
    my $chr_byte = chr($chr);
    my $chr_utf8 = chr($chr);
    utf8::upgrade($chr_utf8);
    my $rx = qr{uci$chr_byte|uci};
    my $s  = "uci$chr_utf8";
    print $chr . " " . ($s =~ $rx ? "ok" : "not ok") . "\n";
}

but if you add another character to the second alternative in the regex, e.g.

... my $rx = qr{uci$chr_byte|uci_}; ...

(the underscore shown here can be any char, it seems), the match suddenly fails for every case tested (160..255), but only if the leading three chars of the alternative are "uci". There are a number of other weird cases, too, but I don't think I need to show them all here. :)
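
For reference, here's the failing variant in full (just the loop from above with that one extra character in the second alternative):

for my $chr (160 .. 255) {
    my $chr_byte = chr($chr);
    my $chr_utf8 = chr($chr);
    utf8::upgrade($chr_utf8);
    my $rx = qr{uci$chr_byte|uci_};   # second alternative one char longer
    my $s  = "uci$chr_utf8";
    print $chr . " " . ($s =~ $rx ? "ok" : "not ok") . "\n";
}

(This prints "not ok" for every char here.)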

As already mentioned in that thread, the problem seems to be related to the new trie code, because if you set ${^RE_TRIE_MAXBUF} = -1; all weirdness disappears.
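
For example, putting that single assignment in front of the test from above makes every case print "ok" again:

${^RE_TRIE_MAXBUF} = -1;        # disable the new trie optimization

for my $chr (160 .. 255) {
    my $chr_byte = chr($chr);
    my $chr_utf8 = chr($chr);
    utf8::upgrade($chr_utf8);
    my $rx = qr{$chr_byte|X}i;  # compiled without tries now
    print $chr . " " . ($chr_utf8 =~ $rx ? "ok" : "not ok") . "\n";
}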

Re^3: RegExp breaks in Perl 5.10
by jfraire (Beadle) on Mar 07, 2008 at 07:06 UTC

    I followed this thread's advice (as I said above) and uploaded the module converted to UTF-8. The good news is that, so far, all reports on both 5.10.0 and 5.8.8 have passed.

    The bad news is that if you run your test backwards, i.e. match a latin-1 string against a UTF-8 pattern ($latin =~ /utf8/), it also fails:

    for my $chr (160 .. 255) {
        my $chr_byte = chr($chr);
        my $chr_utf8 = chr($chr);
        utf8::upgrade($chr_utf8);
        my $rx = qr{uci$chr_utf8|uci_};
        my $s  = "uci$chr_byte";
        print $chr . " " . ($s =~ $rx ? "ok" : "not ok") . "\n";
    }

    Now that the module is UTF-8, I converted the test suite's word list to latin-1. As the test above suggests, the new test suite fails, and for the same 15 words.

    So, is ${^RE_TRIE_MAXBUF} = -1; the most general work-around? What implications does it have? What other options do I have?

    Thank you for your kind help.

      So, is ${^RE_TRIE_MAXBUF} = -1; the most general work-around? What implications does it have? What other options do I have?

      I think your best bet is to do what you've already done :) i.e. use UTF-8 everywhere and declare it as such, both in the script (use utf8;) and for the word list from the file.  OTOH, declaring explicitly which legacy encoding you're using should work just as well (but then you'd miss that warm and fuzzy feeling of utilising state-of-the-art technologies :)
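
      For example, a minimal sketch (the filename is made up, of course):

      use utf8;                                    # the script itself is saved as UTF-8
      open my $fh, '<:encoding(UTF-8)', 'words.txt' or die $!;   # decode the word list on read
      chomp(my @words = <$fh>);                    # the words arrive as character strings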

      Given the nature of the current bug in 5.10.0, what apparently needs to be avoided is relying on Perl to auto-upgrade the strings at match time, which it would have to do if either side hasn't yet been upgraded. At least, that's my conclusion.
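
      In other words, if both sides are already upgraded before the match, there's nothing left for Perl to auto-upgrade. A minimal sketch along the lines of the test code (chr(0xF3) being the ó):

      my $word    = "uci" . chr(0xF3) . "n";   # "ución", initially a byte string
      my $pattern = "uci" . chr(0xF3) . "n";
      utf8::upgrade($word);                    # upgrade both sides up front...
      utf8::upgrade($pattern);                 # ...so nothing gets upgraded at match time
      my $rx = qr{$pattern|uci_};
      print $word =~ $rx ? "ok\n" : "not ok\n";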

      With use encoding "iso-8859-1"; the literal strings in the script are upgraded to character semantics at an earlier point, so the problematic auto-upgrade at match time never comes into play.  Moreover, it generally doesn't hurt to make things explicit, i.e. to tell Perl which encoding to assume where (as a side effect, you get things documented without any further ado).
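
      A minimal sketch of that variant (assuming the script file really is saved as latin-1):

      use encoding 'iso-8859-1';    # source is latin-1; literals get character semantics
      my $word = "ución";           # upgraded at compile time, not at match time
      my $rx   = qr{ución|uci_};    # likewise for the pattern
      print $word =~ $rx ? "ok\n" : "not ok\n";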

      ${^RE_TRIE_MAXBUF} = -1 simply disables the new trie optimizations, so essentially you get what you always had before. That might be ok, too, but I'd only use it as a last resort, which shouldn't be necessary, as there are better solutions, like those outlined above. Not only is it ugly to have to mess with internal settings on a regular basis; if everyone overcautiously defaulted to disabling the new optimizations now that some issue has become known, it would only take longer to find and weed out any remaining issues...  (Sure, you might have a different take on this if stability is a major concern, but then you probably wouldn't be using the newest and shiniest stuff anyway.)
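
      And if you do end up needing it, you can at least confine the setting with local, so that, if I'm not mistaken, only the patterns compiled inside that scope lose the optimization. An untested sketch:

      my $chr_byte = chr(0xF3);            # ó, as in the tests above
      my $rx;
      {
          local ${^RE_TRIE_MAXBUF} = -1;   # trie optimization off within this block only
          $rx = qr{uci$chr_byte|uci_};     # this pattern is compiled without the trie
      }
      # patterns compiled after the block get the optimization again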

        almut,

        It is indeed a warm and fuzzy feeling :-)

        I agree with you that I should not disable the optimizations, as doing so is certainly against progress.

        Context-dependent as this problem is, it went away when I simply rewrote the regexp as follows:

        $R2 =~ /uci(ones|ón)$/

        And it now works for both latin-1 and UTF-8 words. I will certainly include two copies of the word list in the next release, one in each encoding.
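
        For reference, a quick way to check both representations (a sketch; chr(0xF3) stands in for the ó so the snippet doesn't depend on the file's encoding):

        my $o     = chr(0xF3);                 # ó
        my $latin = "uci${o}n";                # "ución" as a latin-1 byte string
        my $utf8  = $latin;
        utf8::upgrade($utf8);                  # the same word, UTF-8 internally
        for my $R2 ($latin, $utf8) {
            print $R2 =~ /uci(ones|${o}n)$/ ? "ok\n" : "not ok\n";   # both print "ok"
        }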

        A strange world, this is.

        Thanks again for your kind help.

        Julio