Re: getting rid of UTF-8
by haukex (Archbishop) on Nov 24, 2022 at 21:49 UTC
I have a file that *should* be all ISO-latin, but the program that created it seems to sprinkle UTF-8 characters round in it.
If that's the case, then that program is horribly broken and I would recommend trying to see what you can do to fix that. Anyway, do you have any sample data that shows both UTF-8 and Latin-1 data?
EF BB BF is the byte order mark (BOM) encoded as UTF-8; it can be present at the beginning of UTF-8 encoded files. The fact that in your second sample it appears after a series of commas could mean that the program is trying to write a CSV file and used a UTF-8 encode function that adds the BOM to individual fields, or that it slurped a file with the wrong encoding and used that as the contents of a field. If this guess is correct, then maybe a solution would be to first parse the CSV file and then individually decode the fields with different encodings, though I would consider that a pretty ugly workaround; plus, you'd have to know the encodings (or guess them, which is a workaround in itself).
Can you tell us more about this program, the file format it is outputting, and give more example data that shows the problem?
Update just to address the question in the title and node: simply clobbering any non-ASCII characters without understanding the input data is almost never the right solution, because you'll almost certainly also delete important characters. Instead, first fix the encoding problems, and if you then really want to ASCIIfy your (correctly decoded!) Unicode data, you can use e.g. Text::Unidecode.
The issue with the sample data you posted is that it is entirely ASCII with some BOMs in it, but from your description it sounded like you could have other Latin-1 (or CP1252 or Latin-9) or UTF-8 characters in it, which you don't show.
What should I do either to upload something to here with example problematic stuff
A hex dump of the raw bytes like you showed above is fine. See also my node here.
and/or be able just to brute-force fix it?
Iff your data consists entirely of a single-byte encoding like the ones I named above, and the only UTF-8 sequences that appear in it are BOMs, then the regex you showed in the root node may be acceptable. However, I very much expect that if there's a BOM, then other UTF-8 characters may be present, and if those are mixed with single-byte encodings, or you've got double-encoded characters, you'll have a tough time picking that apart. But again, you'd need to show us more representative data.
Edit: Typo fixes.
Re: getting rid of UTF-8
by kcott (Archbishop) on Nov 25, 2022 at 07:14 UTC
$ perl -Mstrict -Mwarnings -E '
my $x = "\x{ef}\x{bb}\x{bf}123,,,\x{ef}\x{bb}\x{bf}456";
say "Full string:";
system "echo $x | hexdump -Cv";
$x =~ s/\xef\xbb\xbf//g;
say "All BOM sequences removed:";
system "echo $x | hexdump -Cv";
'
Full string:
00000000 ef bb bf 31 32 33 2c 2c 2c ef bb bf 34 35 36 0a |...123,,,...456.|
00000010
All BOM sequences removed:
00000000 31 32 33 2c 2c 2c 34 35 36 0a |123,,,456.|
0000000a
To remove other characters:

- Non ISO-8859-1: y/\x00-\xff//cd
- Non 7-bit ASCII: y/\x00-\x7f//cd
Re: getting rid of UTF-8
by ikegami (Patriarch) on Nov 25, 2022 at 14:09 UTC
$text =~ s/^\xef\xbb\xbf//g does remove the offending bytes if your string contains the bytes you described. (Well, with the ^ anchor it will only delete the leading sequence; removing the ^ will have it delete the others too.)
Since you repeatedly claim it doesn't, your data is different from what you describe, and we can't help you until you provide a better description of your data (e.g. the output of sprintf "%vX", $string).
fyi, EF BB BF is the UTF-8 encoding of U+FEFF, which is the Byte Order Mark if at the start of the file, and the Zero Width No-Break Space elsewhere.