in reply to Re^4: How does the built-in function length work?
in thread How does the built-in function length work?

perl has to assume a character encoding.
Not at all

Of course it has to. My example used a binary operation (IO), and then a text operation. Since the text operation implies character context, the byte needs to be interpreted in some way. And this interpretation happens to be Latin-1.

"\x{E4}" =~ /\w/

A string literal is not the same as IO; my explanation only applies to my example, not yours.

In your example, the string is generated from inside perl, and can thus be handled transparently, without reference to any encoding. When the string comes from the outside, it is transported as a stream of bytes (because STDIN is a byte stream on UNIX platforms), and when Perl treats it as a text string, some interpretation has to happen.

To come back to my previous example, executed in bash:

              # | the UNIX pipe transports bytes, not
              # | codepoints. So Perl sees the byte E4
$ echo -e "\xE4"|perl -wE 'say <> ~~ /\w/'
                                # ^^^^^^^ a text operation
                                #         sees the codepoint U+00E4

So, at one point we have a byte, and later a codepoint. The mapping from bytes to codepoints is what an encoding does, so Perl needs to use one, and it uses ISO-8859-1. Implicitly, because I never said decode('ISO-8859-1', ...).
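
For comparison, here is the same one-liner with that decode step spelled out; a sketch, but it should print the same 1:

$ echo -e "\xE4" | perl -MEncode -wE 'say decode("ISO-8859-1", scalar <>) ~~ /\w/'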

So I cannot see why you insist that Perl never implicitly uses ISO-8859-1, when I've provided an example that demonstrates just that.

Or what do you think it is, if not ISO-8859-1?
A Unicode code point, regardless of the state of the UTF8 flag.

But it was a byte at the level of the UNIX pipe. Now it is a code point. What mechanism changed it from a byte to a codepoint, if not (implicit) decoding as ISO-8859-1?

Since ISO-8859-1 provides a trivial mapping between the 256 byte values and the first 256 code points, it's really more of an interpretation than an actual decoding step, but it's there nonetheless.
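
To convince yourself that the mapping really is trivial, here is a small sketch using the core Encode module:

use v5.12;
use warnings;
use Encode qw(decode);

my $byte    = "\xE4";                        # the byte as it arrives on the pipe
my $decoded = decode('ISO-8859-1', $byte);   # the explicit decoding step

say $byte eq $decoded ? 'identical' : 'different';   # identical
say sprintf 'U+%04X', ord $decoded;                  # U+00E4

The explicit decode changes nothing observable, which is why the implicit interpretation is so easy to overlook.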

Replies are listed 'Best First'.
Re^6: How does the built-in function length work?
by ikegami (Patriarch) on Dec 03, 2011 at 07:53 UTC

    Since the text operation implies character context, the byte needs to be interpreted in some way.

    Yes, as a Unicode code point.

    A string literal is not the same as IO; my explanation only applies to my example, not yours.

    Both readline and the string literal create the same string, so that only makes sense if you say that readline is the one that does the iso-8859-1 decoding. Is that what you're saying?

    (I hope not, cause it's preposterous to say that copying bytes from disk to memory is a decoding operation. In binmode no less!)
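
    You can check that claim directly; a sketch, with an in-memory handle standing in for a real file here:

        use v5.12;
        use warnings;

        my $literal = "\xE4";

        open my $fh, '<:raw', \$literal or die $!;   # :raw is binmode
        my $read = <$fh>;

        say $read eq $literal ? 'same string' : 'different';   # same string
        say utf8::is_utf8($read) ? 1 : 0;                       # 0: nothing was decoded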

    But it was a byte at the level of the UNIX pipe. Now it is a code point. What mechanism changed it from a byte to a codepoint, if not (implicit) decoding as ISO-8859-1?

    None. There's no now and then; it's always a code point, and it was always stored in a byte.
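
    The storage format can even change while the string stays the same; a sketch (utf8::upgrade only changes the internal representation):

        use v5.12;
        use warnings;

        my $s = "\xE4";          # stored as the single byte 0xE4
        my $t = "\xE4";
        utf8::upgrade($t);       # now stored as two UTF-8 bytes internally

        say $s eq $t ? 'same string' : 'different';   # same string
        say utf8::is_utf8($s) ? 1 : 0;                # 0
        say utf8::is_utf8($t) ? 1 : 0;                # 1: same code point, different storage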

    The mapping from bytes to codepoints is what an encoding does

    I don't call the following iso-8859-1 decoding:

    UV codepoint = s[i];
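
    In Perl-space terms that is just ord; a one-line sketch, with no table lookup anywhere:

        use v5.12;
        say sprintf 'U+%04X', ord "\xE4";   # U+00E4: the byte value is the code point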