in reply to Re^4: JSON::XS (and JSON::PP) appear to generate invalid UTF-8 for character in range 127 to 255
in thread JSON::XS (and JSON::PP) appear to generate invalid UTF-8 for character in range 127 to 255
> Damn right I disagree. It is not Perl's problem that someone uses a function that's documented to check for accidental double-encoding to check whether something is valid UTF-8. That's akin to using uc to get the first character of a string. There's nothing Perl can do to stop you from using a function completely unrelated to the one you want to use.

Except it's kind of hard to understand what the heck the function is doing. "Flagged as utf8", "store a string internally"... too many implementation details. Do you expect many people to understand it?
> This is the second time in this thread that you've implied I maintain that Perl's handling of UTF-8 isn't confusing. That's a lie. The former bugs in Perl (some still present) and the plethora of buggy XS modules (because XS is hard!) have led people like you to disseminate misinformation, which has created a self-feeding vicious loop of confused people. I've repeatedly said that Perl should be able to differentiate encoded strings from decoded strings and prevent you from mixing them.

Maybe you missed that, ikegami... but I never actually have any problems with mojibake in my Perl code, unlike some other people. I know where these kinds of bugs come from and how to fix them. Works for me, eh?
> Speaking of misinformation, improper upgrading doesn't cause double-encoding. Quite the opposite: it causes a string encoded using UTF-8 to become decoded. (Upgrading a string that isn't encoded using UTF-8 creates a corrupt scalar: perl -MDevel::Peek -MEncode=_utf8_on -we"$_ = qq{\x80}; _utf8_on($_); Dump($_)")

I called it 'upgrading' (in quotes) in honor of utf8::upgrade (perl -MDevel::Peek -CO -E 'my $s = "\xff"; Dump $s; say $s; utf8::upgrade($s); Dump $s; say $s' - note that I don't care one bit how Perl actually does that). Not sure why you even mentioned _utf8_on. Anyway, I really dislike the term 'double encoding', because it implies that the problem is with the encoding rather than with the decoding part (encoding needs some decoding first). Why isn't double-encoding UTF-8 a no-op? Really, just explain it in your own words.
(perl -MEncode=encode -E 'say encode("Latin-1", encode("Latin-1", "\xff"))' doesn't seem to do much of anything?)
Replies are listed 'Best First'.
Re^6: JSON::XS (and JSON::PP) appear to generate invalid UTF-8 for character in range 127 to 255
by ikegami (Patriarch) on Dec 07, 2014 at 15:33 UTC
by Anonymous Monk on Dec 07, 2014 at 22:50 UTC
by ikegami (Patriarch) on Dec 10, 2014 at 07:30 UTC