I have a string of Japanese text in utf-8 format (I've looked at the bytes and they appear to be valid utf-8 sequences). Some of the characters are 3 bytes long (e4 bb 96 - 他).
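For what it's worth, this is roughly how I checked the bytes (a minimal sketch; $bytes is a stand-in for my real data):

    use strict;
    use warnings;

    # Quick hex dump of the raw bytes.
    my $bytes = "\xe4\xbb\x96";    # the 3-byte UTF-8 sequence for 他 (U+4ED6)
    print join(' ', map { sprintf '%02x', ord } split //, $bytes), "\n";
    # prints: e4 bb 96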
Encode::decode complains about wide characters, but Encode::decode_utf8 doesn't. In the debugger, typing 'x $var' outputs the string to the screen properly (although my terminal can't display the 3-byte characters; it replaces them with boxes).
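A stripped-down version of the decode calls I'm talking about (a sketch; $bytes is a hypothetical byte string, and FB_CROAK just makes failures fatal):

    use strict;
    use warnings;
    use Encode qw(decode decode_utf8);

    my $bytes = "\xe4\xbb\x96";    # raw UTF-8 bytes for 他

    # decode() wants a byte string; if the string already contains wide
    # (decoded) characters, this is the call that complains.
    my $text = decode('UTF-8', $bytes, Encode::FB_CROAK);

    # decode_utf8() uses Perl's lax 'utf8' encoding; in my case it didn't warn.
    my $text2 = decode_utf8($bytes);

    print length($text), "\n";     # 1 character, not 3 bytes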
However, using warn or print causes the string to be mangled. It's also getting mangled when it goes to the database. I'm running perl with -CS, and I've tried setting binmode to utf8 for both STDOUT and STDERR.
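Roughly what I tried for the handles (a sketch; ':encoding(UTF-8)' is the strict spelling of the layer, ':utf8' the lax one):

    use strict;
    use warnings;

    # Add an encoding layer so print/warn emit UTF-8 instead of mangling
    # wide characters.
    binmode STDOUT, ':encoding(UTF-8)';
    binmode STDERR, ':encoding(UTF-8)';

    my $text = "\x{4ed6}";    # 他 as a decoded character string
    print "$text\n";          # with the layer set, this should come out intact
    warn  "$text\n";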
I feel like I'm missing something fundamental about these strings, but I keep having to relearn about utf8 encoding every time I run into encoding problems.
Update
Turns out I hadn't looked at the string closely enough. There were \u200b (zero-width space) characters in the string that Encode didn't like. Stripping them out resolved the issue.
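For anyone who finds this later, the stripping step looked something like this (a sketch; $raw is a stand-in for my raw input, and U+200B encodes as e2 80 8b in UTF-8):

    use strict;
    use warnings;
    use Encode qw(decode);

    my $raw = "\xe4\xbb\x96\xe2\x80\x8b";    # 他 followed by U+200B in UTF-8
    $raw =~ s/\xe2\x80\x8b//g;               # drop the encoded zero-width spaces
    my $text = decode('UTF-8', $raw);

    # Or, on an already-decoded character string:
    # $text =~ s/\x{200B}//g;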