in reply to Re^3: Interventionist Unicode Behaviors
in thread Interventionist Unicode Behaviors

So, the "quasi-ambiguous" nature of bytes/characters in the \x80-\xFF (U+0080-U+00FF) range seems deeper, subtler, and stranger than I expected: for this set, Perl's internal representation is still single-byte, not really UTF-8.

It may be either a single byte or UTF-8 internally, depending on your environment (pragmas). This is NO PROBLEM if you properly decode all your input and encode all your output. This is not a bug, but a feature that is much needed for backwards compatibility with old code.
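A minimal sketch of that decode-in, encode-out discipline (the file names and the iso-8859-1 input encoding are just assumptions for illustration):

    use Encode qw(decode encode);

    # Decode bytes from the outside world into Perl's internal character strings.
    open my $in, '<', 'input.txt' or die $!;
    my $text = decode('iso-8859-1', do { local $/; <$in> });

    # Work with $text as characters; the internal representation is irrelevant.

    # Encode back to bytes on the way out.
    open my $out, '>', 'output.txt' or die $!;
    print {$out} encode('UTF-8', $text);

The same thing can be spelled as PerlIO layers: open the handles with '<:encoding(iso-8859-1)' and '>:encoding(UTF-8)' and let them do the decoding and encoding for you.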

But if it were set to ":utf8" before the first print statement, the two outputs would again be different, but in a different way, and the first one would be "wrong":

Before the "_utf8_on", which I stress is a BAD IDEA, the string is latin-1. It's converted to UTF-8 as the binmode requested: the character C3 (Ã) becomes the bytes C3 83, A9 (©) becomes the bytes C2 A9, and so on. With the "_utf8_on" you tell Perl that, no, the string is not latin-1 but UTF-8. And since that matches the output encoding, Perl no longer has any need to convert anything.

In other words, first the string is "résumé\n", which when printed is encoded into UTF-8 as [72] [C3 83] [C2 A9] [73] [75] [6D] [C3 83] [C2 A9] [0A]; then someone messes with the internals, and all of a sudden the string is "résumé\n", already UTF-8 encoded as [72] [C3 A9] [73] [75] [6D] [C3 A9] [0A]. (Two hex digits per byte, one bracketed group per character.)
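The whole sequence fits in a few lines (a sketch; Encode::_utf8_on is the very internals-messing being warned against):

    use Encode ();

    binmode STDOUT, ':utf8';

    # The UTF-8 bytes of "résumé\n", never decoded: Perl sees them as the
    # nine latin-1 characters "résumé\n".
    my $s = "r\xC3\xA9sum\xC3\xA9\n";

    print $s;             # 72 C3 83 C2 A9 73 75 6D C3 83 C2 A9 0A

    Encode::_utf8_on($s); # "trust me, these bytes are already UTF-8"

    print $s;             # 72 C3 A9 73 75 6D C3 A9 0A, i.e. "résumé\n"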

Juerd # { site => 'juerd.nl', do_not_use => 'spamtrap', perl6_server => 'feather' }


Re^5: Interventionist Unicode Behaviors
by DrHyde (Prior) on Sep 14, 2006 at 10:09 UTC
    This is NO PROBLEM if you properly decode all your input, and encode all your output.
    And here is where the Unicode-istas go wrong. Every single piece of software on this 'ere machine - and indeed all the machines I use regularly - was packaged well after Unicode became fashionable. In fact, a great deal of it has either been written from scratch or at least received patches, often large ones, since Unicode became fashionable. And yet Unicode doesn't "Just Work". It should, and requiring me to dick about just so I can see non-ASCII characters reliably is a bug.

      The only way of having Unicode/UTF-8 work automatically, by default, without being explicit about it, is to assume that every string is UTF-8 encoded. Such a naive view of the world would have broken most of the gazillion Perl programs and modules that already existed, and would have made it hard to ever pick a new default: the iso-8859-1 problem all over again.
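      For instance, a typical latin-1 byte string is not even valid UTF-8, so an assume-UTF-8 default would have to die (or guess) on perfectly ordinary legacy data. A sketch:

          use Encode ();

          my $latin1 = "caf\xE9";   # "café" as iso-8859-1 bytes
          my $text = eval { Encode::decode('UTF-8', $latin1, Encode::FB_CROAK) };
          print defined $text ? "decoded\n" : "not valid UTF-8: $@";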

      I, for one, am very happy that Perl chose to implement Unicode itself, not just UTF-8, and to support character sets in general, not one encoding in particular. As a result, we do get UTF-8 in a very simple and straightforward way, without breaking backwards or future compatibility.

      Through its character encoding framework, Perl has reached a much higher level of Unicode support than any other dynamic language has so far. All this without introducing separate byte/text string types or assuming anything about your data.

      Joel Spolsky is absolutely right when he writes, "It does not make sense to have a [text] string without knowing what encoding it uses." And so, we shouldn't assume any particular character set. Well, we must assume iso-8859-1 by default, because in practice Perl (and many CPAN modules) has always done so, and we want to maintain compatibility. And because those bytes map so nicely onto codepoints (\x80-\xFF is exactly U+0080-U+00FF), we can safely upgrade these strings.
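      That equivalence is easy to see with the core utf8 functions (a sketch):

          my $s = "\xE9";             # the latin-1 byte for 'é'
          printf "U+%04X\n", ord $s;  # U+00E9

          utf8::upgrade($s);          # internal representation becomes UTF-8...
          printf "U+%04X\n", ord $s;  # ...but the string is still the same: U+00E9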

      Character encodings can never "Just Work". That's not because of Perl, but because of the rest of the world. More specifically, because a lot of (incompatible) character encodings exist. That's tough, and we have to live with it. Fortunately, Perl makes that easy.
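      Converting between two incompatible legacy encodings, for example, is just a decode and an encode (the KOI8-R sample bytes here are illustrative):

          use Encode qw(decode encode);

          my $koi8   = "\xF0\xD2\xC9\xD7\xC5\xD4";  # "Привет" in KOI8-R
          my $chars  = decode('koi8-r', $koi8);     # bytes -> characters
          my $cp1251 = encode('cp1251', $chars);    # characters -> different bytes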

      Juerd # { site => 'juerd.nl', do_not_use => 'spamtrap', perl6_server => 'feather' }