PerlMonks  

Re^2: length() miscounting UTF8 characters?

by AppleFritter (Vicar)
on Apr 28, 2014 at 09:42 UTC ( [id://1084101] )


in reply to Re: length() miscounting UTF8 characters?
in thread length() miscounting UTF8 characters?

Thank you, that's very useful as well. In what sense is using length to count Unicode characters a bug waiting to happen, though? Now I'll admit I've just learned first hand that this is indeed dangerous territory to tread, but the perldoc entry for length (which I checked beforehand to make sure it wouldn't count bytes -- hence my confusion) says:

Like all Perl character operations, length() normally deals in logical characters, not physical bytes. For how many bytes a string encoded as UTF-8 would take up, use length(Encode::encode_utf8(EXPR)) (you'll have to use Encode first).

So if used right, it should work, shouldn't it? Do you have any specific languages or complex data in mind with which it might fail?
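For what it's worth, the two counts the perldoc entry distinguishes can be put side by side in a short sketch (the sample string is arbitrary):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;                              # the source itself is UTF-8
use Encode qw(encode_utf8);

my $str = "héllo";                     # 5 logical characters

my $chars = length $str;               # character count: 5
my $bytes = length encode_utf8($str);  # UTF-8 byte count: 6 (é is 2 bytes)

print "$chars characters, $bytes bytes\n";
```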


Replies are listed 'Best First'.
Re^3: length() miscounting UTF8 characters?
by amon (Scribe) on Apr 28, 2014 at 10:10 UTC

    The problems with length are not around bytes vs. characters, but that length counts code points. Many logical characters are composed from multiple code points, and some logical characters have multiple representations in Unicode.

    For example, consider “á” (U+00E1 LATIN SMALL LETTER A WITH ACUTE). The same logical character can also be composed of two code points: “á” (U+0061 LATIN SMALL LETTER A followed by U+0301 COMBINING ACUTE ACCENT). While both produce the same visual output (the same grapheme), strings containing them have different lengths.

    So when dealing with Unicode text, it's important to think about which length you need: the byte count, the code point count, the grapheme count (visual characters), or the actual display width (various characters are not one column wide – tabs, unprintable characters, and double-width characters from East Asian scripts come to mind). The script in a previous reply takes these different counts into account.
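A minimal sketch of the code point vs. grapheme distinction, using the “á” example above (`\X` matches one extended grapheme cluster):

```perl
use strict;
use warnings;
use v5.12;

my $composed   = "\x{E1}";    # á as a single code point
my $decomposed = "a\x{301}";  # a + combining acute accent

say length $composed;         # 1 code point
say length $decomposed;       # 2 code points

# \X matches one extended grapheme cluster ("visual character")
my $graphemes = () = $decomposed =~ /\X/g;
say $graphemes;               # 1 grapheme
```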

    The issue of multiple encodings for one logical character should also be kept in mind when comparing strings (testing for equality, matching, …). In general, you should normalize the input (usually the fully composed form for output, and the fully decomposed form for internal use) before trying to determine whether two strings match.
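A short sketch of normalizing before comparison, using the core Unicode::Normalize module and the composed/decomposed “á” from above:

```perl
use strict;
use warnings;
use Unicode::Normalize qw(NFD);

my $composed   = "\x{E1}";    # á, one code point
my $decomposed = "a\x{301}";  # a + combining acute accent

# Direct comparison fails: same grapheme, different code points
print $composed eq $decomposed ? "equal\n" : "not equal\n";            # not equal

# After normalizing both to the same form, they compare equal
print NFD($composed) eq NFD($decomposed) ? "equal\n" : "not equal\n";  # equal
```

The same holds with NFC; what matters is that both strings are brought to the *same* form before comparing.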

      Right, thanks again! I hadn't thought about codepoints vs. characters, but I'll keep this in mind; combining accents and other diacritics in particular I might well encounter.

      Searching CPAN shows that there's a module for this, Unicode::Normalize, which I'll look into.

      The problems with length are not around bytes vs. characters, but that length counts code points. Many logical characters are composed from multiple code points

      1. What you call "logical character" is an "extended grapheme cluster", which I abbreviate to "grapheme".

      2. length doesn't count code points. length always counts characters (string elements). It has no idea what those characters are, as that information is neither available nor needed. To length they are just 32-bit or 64-bit numbers. They could be bytes. They could be Unicode code points. But they aren't going to be graphemes (visual characters), as there is no existing system that encodes a grapheme as a single number.
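To illustrate the point that length just counts string elements: the very same two bytes give different lengths depending on whether the string has been decoded (a sketch using the core Encode module):

```perl
use strict;
use warnings;
use Encode qw(decode_utf8);

my $raw     = "\xC3\xA1";          # the UTF-8 bytes of á, as a byte string
my $decoded = decode_utf8($raw);   # decoded: one code point, U+00E1

print length($raw), "\n";          # 2 -- the elements are bytes
print length($decoded), "\n";      # 1 -- the elements are code points
```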

Re^3: length() miscounting UTF8 characters?
by farang (Chaplain) on Apr 28, 2014 at 14:28 UTC

    the perldoc entry for length (which I checked beforehand to make sure it wouldn't count bytes -- hence my confusion)
    It "normally deals in logical characters", but its logic doesn't cover all the intricacies of Unicode.

    Do you have any specific languages or complex data in mind with which it might fail?
    Yes, Thai language is the main one I'm involved with. The modified script below shows that length counts diacriticals in Thai, which may or may not be what is wanted, and is inconsistent with the results for Latin diacriticals in your dataset, which length isn't counting separately. I'm using pre tags so that the Thai will display correctly and shortened lines to facilitate copy/paste.
    #!/usr/bin/env perl
    use warnings;
    use v5.14;
    use Unicode::Normalize qw/NFD/;
    binmode STDOUT, ':encoding(UTF-8)';
    binmode DATA, ':encoding(UTF-8)';
     
    while (<DATA>) {
        chomp;
        print $_, ': ';
        s/[A-Za-z]//g;
        my $alphacount = () = /\p{Alpha}/g;
        say "non-(A-Za-z) symbols <$_>", 
            " contain $alphacount", 
            " alphabetic characters and ",
            getdia($_), " diacritical chars.";
        say "length() thinks there are ", 
            length, " characters\n";
    }
    
    sub getdia {
        my $normalized = NFD($_[0]);
        my $diacount = () = 
            $normalized =~ /\p{Dia}/g;
        return $diacount;
    }
    
    __DATA__
    เป็น
    ผู้หญิง
    เมื่อวันก่อน
    æðaber
    æðahnútur
    æðakölkun
    

      Intricacies - that's putting it mildly! Right now it feels like the more I learn about Unicode, the less I know. ;)

      I'll make a mental note to avoid length when dealing with Unicode data, and to normalize strings before working with them.

Re^3: length() miscounting UTF8 characters?
by Jim (Curate) on Apr 28, 2014 at 18:17 UTC
    In what sense is using length to count Unicode characters a bug waiting to happen, though?

    It's a "bug waiting to happen" when you try to make meaningful inferences about Unicode text by computing the size in bytes of the text in a specific Unicode character encoding scheme (e.g., UTF-8). This is what another monk was hinting at earlier in this thread when he or she suggested "dividing by two." That's a bug waiting to happen.

    In general, when dealing with Unicode text, you're much more likely to need to know the numbers of code points in a string, or the numbers of graphemes in it ("extended grapheme clusters" in Unicode standardese). However, there are situations in which you might need to know the length in bytes of a Unicode string in some specific encoding. An example of this is needing to store character data in a database column with a capacity measured in bytes rather than in Unicode code points or graphemes. If you have a character data type column with a capacity of, say, 255 bytes, then the number of UTF-8 encoded Chinese characters you can insert into the column is likely going to be a lot fewer than the number of UTF-8 encoded Latin characters you can insert into the same column. In this case, knowing the size of the string in code points or graphemes won't help you answer the question "Will it fit?" You need the size in bytes.
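A sketch of such a "will it fit?" check (the 10-byte limit and the sample strings are made up for illustration):

```perl
use strict;
use warnings;
use utf8;
use Encode qw(encode);

my $limit = 10;    # hypothetical column capacity, in bytes

my %bytes_of;
for my $s ("café", "汉字文本") {
    my $bytes = length encode('UTF-8', $s);
    $bytes_of{$s} = $bytes;
    printf "%d code points, %d UTF-8 bytes: %s\n",
        length($s), $bytes,
        $bytes <= $limit ? "fits" : "too big";
}
```

Both strings are four code points long, but only the Latin one fits in the ten bytes: the code point count tells you nothing about the encoded size.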

      OK, thanks again for the detailed reply! That really clears things up.
      Thanks for explaining that /2 issue! I had no idea.

      This thread has been most enlightening.

      ...the majority is always wrong, and always the last to know about it...
      Insanity: Doing the same thing over and over again and expecting different results...
