Your test script perfectly demonstrates that there is nothing different in this case about the actual STDOUT and STDERR filehandles. The difference lies in the functions used to send them the output. print just prints, but warn does a lot more besides: it appends the file name and line number, substitutes a default message if none is given, and can be overridden via $SIG{__WARN__}.
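To illustrate those default behaviours, here is a small sketch (not code from the thread; the script name demo.pl is just a placeholder):

use strict;
use warnings;

# warn appends " at FILE line N." unless the message ends in a newline
warn "something odd";          # e.g. "something odd at demo.pl line 5."
warn "already terminated\n";   # printed as-is, no location appended

# with no argument (and $@ empty) warn supplies a default message
warn;                          # e.g. "Warning: something's wrong at demo.pl line 9."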
We can therefore guess that warn is doing something extra behind the scenes to set the output encoding to UTF-8 and so avoid the warning. That is the same conclusion reached in this blog post and seems the likeliest explanation. perlunifaq also contains this little gem:
It's good that you lost track, because you shouldn't depend on the internal format being any specific encoding. But since you asked: by default, the internal format is either ISO-8859-1 (latin-1), or utf8, depending on the history of the string. On EBCDIC platforms, this may be different even.
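If you do want to peek at that internal format anyway, the core Devel::Peek module and utf8::is_utf8 (neither mentioned in the original post, but both standard) will show the UTF8 flag; a minimal sketch:

use strict;
use warnings;
use utf8;
use Devel::Peek;

my $narrow = "blah";        # only ASCII/Latin-1 characters
my $wide   = "ϗblah頁";     # characters above U+00FF force the utf8 format

print utf8::is_utf8($narrow) ? "narrow: utf8\n" : "narrow: not utf8\n";
print utf8::is_utf8($wide)   ? "wide: utf8\n"   : "wide: not utf8\n";

Dump($wide);                # FLAGS line includes UTF8 for this string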
We can conclusively show that warn does something to the string by overriding the default handler, e.g.:
use strict;
use utf8;

warn q#warn: ϗblah頁#;

$SIG{__WARN__} = sub { print @_ };
warn q#warn: ϗblah頁#;
(As in the OP, those should be the actual Unicode characters.) Here, the second warn generates the wide character error whereas the first does not. To know more about what the default warn handler is doing you would have to start digging into the perl internals.
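As an aside (not part of the original answer): if the aim is simply to stop print from emitting that warning, the usual fix is to declare an encoding layer on the handle rather than to rely on whatever warn does internally. A minimal sketch:

use strict;
use warnings;
use utf8;

# declare that STDOUT expects UTF-8 encoded output
binmode STDOUT, ':encoding(UTF-8)';

# no "Wide character in print" warning, and the bytes are encoded properly
print "print: ϗblah頁\n";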
For other readers: this is all the case under v5.10.1 as reported by peterp; other versions may vary.
In reply to Re: Unexpectedly no wide char error when stderr points at stdout by hippo
in thread Unexpectedly no wide char error when stderr points at stdout by peterp