In reply to: detecting non-ascii chars in a string

I tried with Chinese characters; each has three bytes when encoded in UTF-8, the same as Japanese. A couple of things from my testing (on Win32):


Re: Re: detecting non-ascii chars in a string
by grantm (Parson) on Dec 05, 2002 at 08:51 UTC

    I think you have something wrong with your test data. The ord() function most definitely does correctly handle multi-byte UTF-8 characters. For example this code:

    use utf8;
    my $euro = "\x{20AC}";
    print ord($euro);    # 8364
    print length($euro); # 1

    correctly prints 8364 and 1 (even under 5.6). In UTF-8, the Euro symbol encodes to the three bytes 0xE2 0x82 0xAC but this code:

    use utf8;
    my $binary = "\xE2\x82\xAC";
    print ord($binary);    # 226
    print length($binary); # 3

    prints out 226 and 3. Both strings consist of the same three bytes, but $euro is flagged internally as a UTF-8 string whereas $binary is not.

    Remember, Perl has to deal with both character data and binary data. Just because Perl sees the 0xE2 0x82 0xAC sequence of bytes does not mean it should always treat it as a single character.
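    When you know a byte string really does hold UTF-8 data, you can tell Perl to treat it as characters by decoding it. A minimal sketch using the core Encode module (not part of the original node, just an illustration of the byte-vs-character distinction above):

    ```perl
    use strict;
    use warnings;
    use Encode qw(decode);

    # The same three octets as $binary above, read as raw bytes.
    my $bytes = "\xE2\x82\xAC";

    # decode() interprets the octets as UTF-8 and returns a
    # character string with the internal UTF-8 flag set.
    my $chars = decode('UTF-8', $bytes);

    print ord($chars), "\n";    # 8364 (U+20AC, the Euro sign)
    print length($chars), "\n"; # 1
    ```

    After decoding, ord() and length() see one character, exactly as with the $euro example above.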

    Here are a few ways to enter UTF-8 encoded non-ASCII characters:

    • Use a UTF-8 aware editor (e.g. VIM) and just type them into string literals (or even variable names)
    • Use the Unicode escape sequence \x{XXXX} in a double quoted string (see above)
    • Read some data in from an XML file using a proper XML Parser module

    For more details see the utf8 man page.
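    To come back to the original question, a simple way to detect non-ASCII characters is to match any code point above 127. This sketch is my own addition, not from the node above; note that on a raw byte string it flags individual high bytes rather than decoded characters, so decode first if you want character semantics:

    ```perl
    use strict;
    use warnings;

    # Returns 1 if the string contains any character outside
    # the ASCII range 0x00-0x7F, 0 otherwise.
    sub has_non_ascii {
        my ($str) = @_;
        return $str =~ /[^\x00-\x7F]/ ? 1 : 0;
    }

    print has_non_ascii("hello"), "\n";    # 0
    print has_non_ascii("\x{20AC}"), "\n"; # 1
    ```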