in reply to Re^5: Interventionist Unicode Behaviors
in thread Interventionist Unicode Behaviors

The only way to have Unicode/UTF-8 work automatically, by default, without being explicit about it, is to assume that every string is UTF-8 encoded. Such a naive view of the world would have broken most of the gazillion Perl programs and modules that already existed, and would make it hard to ever pick a new default: the iso-8859-1 problem all over again.
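For contrast, here is a minimal sketch of the explicit approach: read raw bytes and decode them deliberately instead of assuming UTF-8 anywhere. The file name is hypothetical; only the standard Encode module is used.

    use strict;
    use warnings;
    use Encode ();

    # Read raw bytes, then decode them explicitly; nothing is assumed to be UTF-8.
    open my $fh, '<:raw', 'input.txt' or die "open: $!";   # hypothetical file
    my $bytes = do { local $/; <$fh> };
    my $chars = Encode::decode('UTF-8', $bytes, Encode::FB_CROAK);  # die on malformed input
    printf "read %d characters\n", length $chars;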

I, for one, am very happy that Perl chose to implement Unicode and character sets in general, not just UTF-8. As a result, we get UTF-8 in a very simple and straightforward way, without breaking backward or future compatibility.

Through its character encoding framework, Perl has reached a much higher level of Unicode support than any other dynamic language has so far. All this without introducing separate string types or assuming any particular encoding.
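As an illustration of that framework (a sketch using only the core Encode module), the same character string can be decoded from and encoded to any supported character set; UTF-8 has no privileged position:

    use strict;
    use warnings;
    use Encode qw(decode encode);

    my $chars  = decode('iso-8859-1', "caf\xE9");   # bytes -> characters
    my $utf8   = encode('UTF-8',  $chars);          # characters -> UTF-8 bytes
    my $cp1252 = encode('cp1252', $chars);          # characters -> cp1252 bytes
    printf "%d chars, %d UTF-8 bytes, %d cp1252 bytes\n",
        length $chars, length $utf8, length $cp1252;    # prints: 4 chars, 5 UTF-8 bytes, 4 cp1252 bytes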

Joel Spolsky is absolutely right when he writes "It does not make sense to have a [text] string without knowing what encoding it uses." And so, we shouldn't assume any particular character set. Well, we must assume iso-8859-1 by default: partly because in practice Perl (and many CPAN modules) has always done so and we want to maintain compatibility, and partly because iso-8859-1 byte values map one-to-one onto the same Unicode codepoints, so those strings can be upgraded safely.
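That "safe upgrade" can be demonstrated directly. utf8::upgrade() changes only the internal representation, never the characters, precisely because each iso-8859-1 byte value equals its Unicode codepoint:

    use strict;
    use warnings;

    my $s = "caf\xE9";                      # iso-8859-1 bytes for "café"
    printf "U+%04X\n", ord substr $s, 3;    # U+00E9
    utf8::upgrade($s);                      # switch to the UTF-8 internal format
    printf "U+%04X\n", ord substr $s, 3;    # still U+00E9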

Character encodings can never "Just Work". That's not because of Perl, but because of the rest of the world. More specifically, because a lot of (incompatible) character encodings exist. That's tough, and we have to live with it. Fortunately, Perl makes that easy.
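For instance (file names are hypothetical), PerlIO encoding layers let each filehandle declare its own character set, which is about as easy as living with multiple encodings gets:

    use strict;
    use warnings;

    # Each handle declares its own encoding; the program deals only in characters.
    open my $in,  '<:encoding(shift_jis)', 'legacy.txt' or die "open: $!";
    open my $out, '>:encoding(UTF-8)',     'modern.txt' or die "open: $!";
    while (my $line = <$in>) {
        print {$out} $line;                 # characters in, characters out
    }
    close $in;
    close $out;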

Juerd # { site => 'juerd.nl', do_not_use => 'spamtrap', perl6_server => 'feather' }
