Re^2: Special character not being captured
by vr (Curate) on Jun 18, 2019 at 12:49 UTC
>perl -MEncode=decode -MData::Dump=dd -E "dd decode q(UTF-8), substr qq(\xC3\x86), 0, 1"
"\x{FFFD}"
but I don't see Lady_Aleena decoding anything.
| [reply] [d/l] [select] |
decode expects its input to be UTF-8, but you supplied the single byte \xC3. On its own it isn't a valid UTF-8 sequence, so it's decoded to the Replacement Character U+FFFD.
You need
Encode::encode("UTF-8", substr(Encode::decode("UTF-8", "\xC3\x86"), 0, 1))
to get UTF-8 Æ back.
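A minimal round-trip sketch of the above (Encode is core Perl; the script is an illustration, not from the thread):

```perl
use strict;
use warnings;
use Encode qw(decode encode);

# "\xC3\x86" is the character Æ (U+00C6) encoded as two UTF-8 octets
my $octets = "\xC3\x86";

# Decode first, then take the first *character* of the decoded string
my $char = substr decode('UTF-8', $octets), 0, 1;
printf "U+%04X\n", ord $char;    # U+00C6

# Encoding that character again yields the original two octets
my $bytes = encode('UTF-8', $char);
printf "%vX\n", $bytes;          # C3.86
```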
map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]
| [reply] [d/l] [select] |
Right: the UTF-8 input wasn't decoded, so a string of octets was passed to first_alpha; the first octet, \xC3, was returned and should have become the hash key, yet somehow it is the Replacement Character. But then the dump as shown (a hash of arrays) cannot be the output of alpha_hash, so it's not really an SSCE; one can only guess.
| [reply] [d/l] [select] |
I suspect something happens after first_alpha, i.e. to its result: \xC3 is transformed into \x{FFFD}. But this was idle curiosity on my part, only tangentially related to your problem (solved by choroba's advice); it isn't worth your investigation, never mind.
| [reply] [d/l] [select] |
Re^2: Special character not being captured
by Lady_Aleena (Priest) on Jun 17, 2019 at 19:57 UTC
Please, would you explain why I did not need to specify an encoding even though the original data file was encoded as Windows-1252?
No matter how hysterical I get, my problems are not time sensitive. So, relax, have a cookie, and a very nice day!
Lady Aleena
| [reply] |
Because Windows-1252 encodes Æ the same way your terminal (or browser) was configured to expect it (e.g. Latin-1 or Windows-1252).
map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]
| [reply] [d/l] |
First of all, ignore any explanation that mentions latin-1. Perl doesn't know anything about latin-1.
The lack of decoding of inputs plus the lack of encoding of outputs means the bytes were copied through unchanged. So if your input was encoded using cp1252, and this output goes to a terminal (or browser) expecting cp1252, it works.[1]
The problem with this approach is that lots of tools expect decoded text (strings of Unicode Code Points), not encoded text (string of cp1252 bytes).
For example,
- /\w/ will fail to work properly.
- uc will fail to work properly.
- length might not do what you want (for some encodings).
- substr might not do what you want (for some encodings).
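The points above can be seen in a short sketch (assuming the core Encode module; the strings are illustrative):

```perl
use strict;
use warnings;
use Encode qw(decode);

my $bytes = "\xC3\x86on";              # "Æon" as UTF-8 octets
my $chars = decode('UTF-8', $bytes);   # "Æon" as code points

print length($bytes), "\n";            # 4 -- length counts octets
print length($chars), "\n";            # 3 -- length counts characters

# \w sees the decoded Æ as a word character, but not the raw octet 0xC3
print $chars =~ /^\w/ ? "word" : "not word", "\n";   # word
print $bytes =~ /^\w/ ? "word" : "not word", "\n";   # not word
```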
In detail:
Perl expects the source file to be encoded using ASCII (no utf8;) or UTF-8 (use utf8;).[2] That said, when expecting ASCII (no utf8;), bytes outside of ASCII in string literals produce a character with the same value in the resulting string.
For example, say Perl expects ASCII (no utf8;) and it encounters a string literal that contains byte 80. This is illegal ASCII, but it's "€" in cp1252. Perl will produce a string that contains character 80. If you were to later print this out to a terminal expecting cp1252 (without doing any form of encoding), you'd see "€".
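That behaviour can be observed directly (a minimal sketch, not from the thread):

```perl
use strict;
use warnings;
# no "use utf8;" here, so string literals are taken byte by byte

my $s = "\x80";               # illegal as ASCII; "€" in cp1252
printf "U+%04X\n", ord $s;    # U+0080 -- one character with that value
print length($s), "\n";       # 1
```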
[2] EBCDIC machines expect EBCDIC and UTF-EBCDIC rather than ASCII and UTF-8.
| [reply] [d/l] [select] |
Re^2: Special character not being captured
by Anonymous Monk on Jun 17, 2019 at 20:04 UTC
Note that `use utf8;` also applies to the DATA section, so the binmode on *DATA isn't needed.
| [reply] |
Re^2: Special character not being captured
by ikegami (Patriarch) on Jun 24, 2019 at 22:07 UTC
DATA is the handle Perl uses to read the source file. As a result, use utf8; affects not just the source file but DATA as well: specifically, it adds a :utf8 layer to DATA. Since DATA already has a :utf8 layer, adding :encoding(UTF-8) is incorrect (though harmless).
Furthermore, use open ':std', ':encoding(UTF-8)'; adds :encoding(UTF-8) not just to STDOUT, but also to STDIN and STDERR. (It also causes instances of open in scope to add that layer by default.) And it does so at compile time. This is usually the better route.
#!/usr/bin/perl
use warnings;
use strict;
use feature qw{ say };
use utf8;
use open ':std', ':encoding(UTF-8)';
say substr('Æon Flux', 0, 1);
say substr <DATA>, 0, 1;
__DATA__
Æon Flux
| [reply] [d/l] [select] |
Re^2: Special character not being captured
by Lady_Aleena (Priest) on Jun 20, 2019 at 18:15 UTC
choroba, it confuses me why, when I use make_hash, it returns the correct strings as keys and values without my having to specify an encoding, but when I go to get the first character with either the first_alpha subroutine or substr, I suddenly need to specify the encoding. All of these subroutines are in the same module, where encoding is not specified anywhere. That some subroutines return the correct strings without specifying an encoding while others do not is confusing.
If this helps, I am including my locale.
me@office:~$ locale
LANG=en_US.UTF-8
LANGUAGE=
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE=C
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
As an aside, I rewrote first_alpha. The original horrified me. I hope the rewrite is cleaner.
Original:

    sub first_alpha {
        my $alpha = shift;
        $alpha = ucfirst($alpha) if $alpha =~ /^\l./;
        $alpha =~ s/\s*\b(A|a|An|an|The|the)(_|\s)//xi;
        if ($alpha =~ /^\d/) {
            $alpha = '#';
        }
        elsif ($alpha !~ /^\p{uppercase}/) {
            $alpha = '!';
        }
        else {
            $alpha =~ s/^(.)(\w|\W)+/$1/;
        }
        return $alpha;
    }

Rewrite:

    sub first_alpha {
        my $string = shift;
        $string =~ s/\s*\b(A|a|An|an|The|the)(_|\s)//xi;
        my $alpha = uc substr($string, 0, 1);
        if ($alpha =~ /^\d/) {
            $alpha = '#';
        }
        elsif ($alpha !~ /^\p{uppercase}/) {
            $alpha = '!';
        }
        return $alpha;
    }
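For what it's worth, a sketch of the rewrite operating on decoded text; the decode call at the boundary is my assumption about where the data comes from, not part of the original module:

```perl
use strict;
use warnings;
use open ':std', ':encoding(UTF-8)';   # encode output for a UTF-8 terminal
use Encode qw(decode);

sub first_alpha {
    my $string = shift;
    $string =~ s/\s*\b(A|a|An|an|The|the)(_|\s)//xi;
    my $alpha = uc substr($string, 0, 1);
    if    ($alpha =~ /^\d/)            { $alpha = '#' }
    elsif ($alpha !~ /^\p{Uppercase}/) { $alpha = '!' }
    return $alpha;
}

# Raw octets as read from a UTF-8 file; decode before calling first_alpha
my $raw = "\xC3\x86on Flux";
print first_alpha(decode('UTF-8', $raw)), "\n";   # Æ
```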
No matter how hysterical I get, my problems are not time sensitive. So, relax, have a cookie, and a very nice day!
Lady Aleena
| [reply] [d/l] [select] |
> when I go to get the first character (...) I suddenly need to specify the encoding
UTF-8 is a multi-byte encoding, which means that some characters, Æ among them, are encoded as more than one byte (in this case two: 0xC3 0x86). If a string starts with such a character but Perl doesn't know the encoding, it assumes Latin-1, which is a single-byte encoding. The first character then corresponds to the first byte only, 0xC3, which has no meaning in UTF-8 on its own, so it's rendered as �, the Replacement Character.
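In other words (a small sketch; the strings are illustrative):

```perl
use strict;
use warnings;
use Encode qw(decode);

my $bytes = "\xC3\x86on";   # "Æon" as UTF-8 octets

# Without decoding, the "first character" is just the first octet, 0xC3
printf "U+%04X\n", ord substr $bytes, 0, 1;                   # U+00C3

# After decoding, the first character is the whole Æ, U+00C6
printf "U+%04X\n", ord substr decode('UTF-8', $bytes), 0, 1;  # U+00C6
```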
map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]
| [reply] [d/l] |
One last thing: I've been trying to figure out how to add utf8 to first_alpha, which I posted earlier, and I am not having any success with it. So, how should I add it to that subroutine?
No matter how hysterical I get, my problems are not time sensitive. So, relax, have a cookie, and a very nice day!
Lady Aleena
| [reply] [d/l] |