I believe that part of the confusion lies in badly written modules. Since Perl provides the function "is_utf8", it is very easy to check what kind of input the user has provided and to use the appropriate "Encode::encode" or "Encode::decode" to get the desired form.
You seem to equate "is a text string" with "is_utf8 returns true". That's wrong.
Perl has two possible internal formats: Latin-1 and UTF-8. It is perfectly fine for a decoded text string to be stored as Latin-1. Decoding it again just because is_utf8 returns false is simply wrong.
Example (on a UTF-8 terminal; note that -CS sets the :encoding(UTF-8) layer on STDOUT, among other things):
$ perl -CS -Mstrict -wE 'say "\x{ab}oo\x{bb}"'
«oo»
$ perl -CS -Mstrict -wE 'say utf8::is_utf8 "\x{ab}oo\x{bb}"'

$ # let's convince ourselves that lc() works properly:
$ perl -CS -Mstrict -wE 'say "\x{C6}"'
Æ
$ perl -CS -Mstrict -wE 'say lc "\x{C6}"'
æ
$ perl -CS -Mstrict -wE 'say utf8::is_utf8 "\x{C6}"'

$
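To see why decoding such a string a second time is harmful, here is a minimal sketch (assuming a reasonably recent perl with Encode; the exact substitution behaviour depends on the CHECK argument passed to decode, which defaults to replacing malformed sequences):

use strict;
use warnings;
use Encode qw(decode);

# A text string written directly as characters: «oo»
# (U+00AB, o, o, U+00BB).  All code points are below 256, so Perl is
# free to keep it in its Latin-1 internal format and is_utf8 is false.
my $text = "\x{ab}oo\x{bb}";
printf "is_utf8: %d, length: %d\n", utf8::is_utf8($text) ? 1 : 0, length $text;

# Decoding it "again" because is_utf8 is false treats those characters
# as raw octets.  0xAB and 0xBB are not valid UTF-8, so with the default
# CHECK value they come back as U+FFFD replacement characters: the text
# has been mangled.
my $mangled = decode('UTF-8', $text);
printf "after bogus decode: %vd\n", $mangled;   # 65533.111.111.65533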
Summary: Strings internally stored as Latin-1 can be perfectly fine text strings. Trying to use is_utf8 to determine whether a string holds characters or octets is wrong.
In fact, every string can be seen as a text string (which is exactly how functions like lc and uc treat it), but if you forget to decode the input data, the user will be surprised by the result.
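To illustrate that last point, here is a minimal sketch (the particular octets and the Encode::decode call are purely illustrative):

use v5.12;        # enables unicode_strings, so lc() applies Unicode rules
use warnings;
use Encode qw(decode);

# The UTF-8 octets for "Æ" (U+00C6) are 0xC3 0x86.  If you forget to
# decode them, Perl still treats the string as text, one character per
# octet, so lc() lowercases "Ã" (0xC3) and leaves the C1 control
# character 0x86 alone: two characters, neither of them "æ".
my $octets = "\xC3\x86";
my $wrong  = lc $octets;                    # "\xE3\x86"

# Decode first, and lc() does what the user expects.
my $right  = lc decode('UTF-8', $octets);   # "æ" (U+00E6)

printf "undecoded: %vd   decoded: %vd\n", $wrong, $right;
# prints: undecoded: 227.134   decoded: 230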
In reply to Re^3: text encodings and perl
by moritz
in thread text encodings and perl
by andal