

in reply to length() miscounting UTF8 characters?

Using the length function to count Unicode characters is a bug waiting to happen. It works with your dataset and will work with many others, but may fail on certain languages or with complex data. Much more robust is to use Unicode properties.

#!/usr/bin/env perl
use warnings;
use v5.14;
binmode STDOUT, 'utf8';
binmode DATA, 'encoding(utf-8)';

while (<DATA>) {
    chomp;
    print $_, ': ';
    s/[A-Za-z]//g;
    my $alphacount = () = /\p{Alpha}/g;
    say "non-[A-Za-z] symbols <$_> contain $alphacount alphabetic characters";
}

__DATA__
æðaber
æðahnútur
æðakölkun
æðardúnn
æðarfugl
æðarkolla
æðarkóngur
æðarvarp
æðruorð
My standard practice has become to use utf8::all to handle all streams, saving me from specifying each stream's encoding separately. There are probably some pitfalls in using it, but so far I haven't encountered any.
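
For instance, a minimal sketch of what that looks like (this assumes the utf8::all CPAN module is installed):

#!/usr/bin/env perl
use v5.14;
use warnings;
use utf8::all;    # UTF-8 for STDIN/STDOUT/STDERR, @ARGV and filehandles
                  # opened in scope; also enables the utf8 source pragma

say "æðarfugl";   # no per-stream binmode calls needed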

Replies are listed 'Best First'.
Re^2: length() miscounting UTF8 characters?
by AppleFritter (Vicar) on Apr 28, 2014 at 09:42 UTC

    Thank you, that's very useful as well. In what sense is using length to count Unicode characters a bug waiting to happen, though? Now I'll admit I've just learned first hand that this is indeed dangerous territory to tread, but the perldoc entry for length (which I checked beforehand to make sure it wouldn't count bytes -- hence my confusion) says:

    Like all Perl character operations, length() normally deals in logical characters, not physical bytes. For how many bytes a string encoded as UTF-8 would take up, use length(Encode::encode_utf8(EXPR)) (you'll have to use Encode first).

    So if used right, it should work, shouldn't it? Do you have any specific languages or complex data in mind with which it might fail?

      The problems with length are not around bytes vs. characters, but that length counts code points. Many logical characters are composed from multiple code points, and some logical characters have multiple representations in Unicode.

      For example, consider “á” (U+00E1 latin small letter a with acute). The same logical character could also be composed of two codepoints: “á” (U+0061 latin small letter a, U+0301 combining acute accent). So while they produce the same visual output (the same grapheme), the strings containing them would have different lengths.

      So when dealing with Unicode text, it's important to think about which length you need: the byte count, the codepoint count, the grapheme count (visual characters), or the actual display width (there are various characters that are not one column wide – tabs, unprintable characters, and double-width characters, e.g. from East Asian scripts, come to mind). The script in a previous reply takes these different counts into account.
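
      For example, a minimal sketch of the first three counts, using only core facilities (Encode ships with perl, and in recent perls \X matches an extended grapheme cluster):

      #!/usr/bin/env perl
      use v5.14;
      use Encode qw(encode_utf8);

      my $str = "\x{61}\x{301}";    # "a" + combining acute: one grapheme

      say length encode_utf8($str);        # 3 -- bytes in the UTF-8 encoding
      say length $str;                     # 2 -- code points
      say scalar( () = $str =~ /\X/g );    # 1 -- graphemes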

      The issue of multiple representations of one logical character should also be kept in mind when comparing strings (testing for equality, matching, …). In general, you should normalize the input (usually to the fully composed form for output, and the fully decomposed form for internal use) before trying to determine whether two strings match.
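
      A minimal sketch of such a comparison, using the core Unicode::Normalize module:

      #!/usr/bin/env perl
      use v5.14;
      use Unicode::Normalize qw(NFC NFD);

      my $composed   = "\x{E1}";          # U+00E1, precomposed "á"
      my $decomposed = "\x{61}\x{301}";   # "a" + U+0301 combining acute

      say $composed eq $decomposed           ? "same" : "different";  # different
      say NFC($composed) eq NFC($decomposed) ? "same" : "different";  # same
      say NFD($composed) eq NFD($decomposed) ? "same" : "different";  # same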

        Right, thanks again! I hadn't thought about codepoints vs. characters, but I'll keep this in mind; combining accents and other diacritics in particular I might well encounter.

        Searching CPAN shows that there's a module for this, Unicode::Normalize, which I'll look into.

        The problems with length are not around bytes vs. characters, but that length counts code points. Many logical characters are composed from multiple code points

        1. What you call "logical character" is an "extended grapheme cluster", which I abbreviate to "grapheme".

        2. length doesn't count code points. length always counts characters (string elements). It has no idea what those characters are, as that information is neither available nor needed. They are just 32-bit or 64-bit numbers to length. They could be bytes. They could be Unicode code points. But they aren't going to be graphemes (visual characters), as there is no existing system that encodes a grapheme as a single number.
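
        A quick sketch of that point (decode_utf8 comes from the core Encode module):

        #!/usr/bin/env perl
        use v5.14;
        use Encode qw(decode_utf8);

        my $bytes = "\xC3\xA9";            # the two UTF-8 bytes of "é", undecoded
        my $chars = decode_utf8($bytes);   # decoded: a single element, U+00E9

        say length $bytes;   # 2 -- here the string elements happen to be bytes
        say length $chars;   # 1 -- after decoding, the elements are code points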

      the perldoc entry for length (which I checked beforehand to make sure it wouldn't count bytes -- hence my confusion)
      It "normally deals in logical characters", but its logic doesn't cover all the intricacies of Unicode.

      Do you have any specific languages or complex data in mind with which it might fail?
      Yes, Thai is the main language I'm involved with. The modified script below shows that length counts diacriticals in Thai, which may or may not be what is wanted, and which is inconsistent with the results for the Latin diacriticals in your dataset, where length doesn't count them separately. I'm using pre tags so that the Thai displays correctly, and I've shortened the lines to facilitate copy/paste.
      #!/usr/bin/env perl
      use warnings;
      use v5.14;
      use Unicode::Normalize qw/NFD/;
      binmode STDOUT, 'utf8';
      binmode DATA, 'encoding(utf-8)';
       
      while (<DATA>) {
          chomp;
          print $_, ': ';
          s/[A-Za-z]//g;
          my $alphacount = () = /\p{Alpha}/g;
          say "non-(A-Za-z) symbols <$_>", 
              " contain $alphacount", 
              " alphabetic characters and ",
              getdia($_), " diacritical chars.";
          say "length() thinks there are ", 
              length, " characters\n";
      }
      
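      # Decompose to NFD first so that precomposed characters
      # expose their combining (diacritical) marks to \p{Dia}.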
      sub getdia {
          my $normalized = NFD($_[0]);
          my $diacount = () = 
              $normalized =~ /\p{Dia}/g;
          return $diacount;
      }
      
      __DATA__
      เป็น
      ผู้หญิง
      เมื่อวันก่อน
      æðaber
      æðahnútur
      æðakölkun
      

        Intricacies - that's putting it mildly! Right now it feels like the more I learn about Unicode, the less I know. ;)

        I'll make a mental note to avoid length when dealing with Unicode data, and to normalize strings before working with them.

      In what sense is using length to count Unicode characters a bug waiting to happen, though?

      It's a "bug waiting to happen" when you try to make meaningful inferences about Unicode text by computing the size in bytes of the text in a specific Unicode character encoding scheme (e.g., UTF-8). This is what another monk was hinting at doing earlier in this thread when he or she suggested "dividing by two." That's a bug waiting to happen.

      In general, when dealing with Unicode text, you're much more likely to need to know the number of code points in a string, or the number of graphemes in it ("extended grapheme clusters" in Unicode standardese). However, there are situations in which you might need to know the length in bytes of a Unicode string in some specific encoding. An example is storing character data in a database column whose capacity is measured in bytes rather than in Unicode code points or graphemes. If you have a character column with a capacity of, say, 255 bytes, then the number of UTF-8 encoded Chinese characters you can insert into it is likely going to be a lot smaller than the number of UTF-8 encoded Latin characters you can insert into the same column. In this case, knowing the size of the string in code points or graphemes won't help you answer the question "Will it fit?" You need the size in bytes.
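
      For example, a minimal sketch of such a fit check (the 255-byte limit is a hypothetical stand-in, not a real schema):

      #!/usr/bin/env perl
      use v5.14;
      use Encode qw(encode_utf8);

      my $limit = 255;    # hypothetical column capacity, in bytes

      for my $text ("Hello, world", "\x{4F60}\x{597D}" x 50) {    # Latin vs. Chinese
          my $bytes = length encode_utf8($text);
          say length($text), " code points, $bytes bytes: ",
              $bytes <= $limit ? "fits" : "does not fit";
      }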

        OK, thanks again for the detailed reply! That really clears things up.
        Thanks for explaining that /2 issue! I had no idea.

        This thread has been most enlightening.

Re^2: length() miscounting UTF8 characters?
by moritz (Cardinal) on Apr 28, 2014 at 19:27 UTC
    Using the length function to count unicode characters is a bug waiting to happen.

    Well, all Perl builtins work at the codepoint level, including length. Depending on your definition of "character", that may or may not be what the OP wants.

    I've attempted to implement "extended grapheme cluster" (that is, any base char + modifiers is considered a "character") logic in Perl6::Str. Feedback very welcome :-).

      Yes, "extended grapheme clusters" are what I'm apparently interested in, and what I'd ordinarily call "characters", rather than codepoints.

      I've not looked at Perl 6 yet, but being able to work with Unicode data from a high-level perspective, without caring too much about implementation details such as the various representation layers (the encoding layer that takes bytes to codepoints, and the next one that takes codepoints to "extended grapheme clusters"), would be a huge boon for many, including me.

        You seem to be thinking that Moritz is suggesting you use or look into Perl 6. I'm pretty sure he is not.

        Yes, Perl 6 does aim to enable working from the high-level perspective you describe.

        No, it's not worth looking into Perl 6 if you just want to get stuff done and don't care to have fun figuring out how it works and contributing to the Perl 6 effort by fixing and working around bugs, speed problems, etc.

        Moritz mentioned Perl6::Str. This is a perl 5 module he wrote, to be used from perl 5 scripts. It is one of several perl 5 modules whose authors chose the Perl6:: namespace on CPAN to reflect that the module is somehow related to Perl 6.

        To quote the Perl6::Str module description:

        Perl 5 offers string manipulation at the byte level (for non-upgraded strings) and at the codepoint level (for decoded strings). However, it fails to provide string manipulation at the grapheme level; that is, it has no easy way of treating a sequence of codepoints, in which all but the first are combining characters (like accents, for example), as one character.

        Perl6::Str tries to solve this problem by introducing a string object with an API similar to that of Perl 6 (as far as possible), and emulating common operations such as substr, chomp and chop at the grapheme level. It also introduces builtin string methods found in Perl 6 such as samecase.

        In summary, as I understand it: for production-grade grapheme-level handling, it's best to rely on perl 5, accepting all the intricacies and complications that inevitably arise; for some Perl 6-like features in perl 5 that attempt to hide some of that complexity (trading away some magic and speed), you can consider Perl6::Str; and for the ideal, simpler Unicode handling scenario you described, there's no better prospect than Perl 6, but it's not yet ready for most users and use cases.

        Representing the written languages of the world on computers is complex. The Unicode Standard is complex. Programming Unicode is complex. There's a limit to how much of this complexity can be hidden from computer programmers.

        If you want to understand Unicode better, and how to think correctly about programming Unicode, read Tom Christiansen's excellent Stack Overflow post here. If, after reading his well-known post, you find you need more of Tom's Perl Unicode wisdom, then come back to PerlMonks and read what tchrist has written about the topic here.

      Well, all perl builtins work at the codepoint level, including length. Depending on your definition of "character", that might or might not be what the OP wants.
      Sure, I'm just saying that bugs or unexpected results can occur if care is not taken. As amon pointed out, the same visual representation of a character with a diacritical might have either one or two codepoints.
      #!/usr/bin/env perl
      use v5.14;
      use warnings;
      use utf8;
      binmode STDOUT, 'utf8';

      my $o_umlaut1 = "\x{F6}";
      my $o_umlaut2 = "\x{6F}\x{308}";
      my $string1 = "æð" . $o_umlaut1;
      my $string2 = "æð" . $o_umlaut2;

      say "length of $string1 is ", length($string1);
      say "length of $string2 is ", length($string2);
      __OUTPUT__
      length of æðö is 3
      length of æðö is 4
      

      I'll play around with your module. Thai is somewhat unusual in that the first combining character may be another alphabetic character, so counting extended graphemes does not necessarily give the correct count of alphabetic characters.