http://qs1969.pair.com?node_id=11148358

BernieC has asked for the wisdom of the Perl Monks concerning the following question:

I have a file that *should* be all ISO-latin, but the program that created it seems to sprinkle UTF-8 characters round in it. For example, the file begins

ef bb bf 49 6d 70 6f 46 69 72 73 74 20 4e 61 6d

later on there's

2c 2c 2c 2c 2c 2c 2c 2c 2c 2c 2c 2c 22 ef bb bf 39 35 33 2d 31 35 31 33 0d 0a 53 63 6f 74 74 20

And that kind of thing pops up all throughout the file. I tried "cleaning" it by running every line through

$line = Encode::encode("ISO-8859-1", $line);

But it didn't help. I also tried

$text =~ s/^\xef\xbb\xbf//g ;

It didn't help either. Is there some way to get rid of it all?

Re: getting rid of UTF-8
by haukex (Archbishop) on Nov 24, 2022 at 21:49 UTC
    I have a file that *should* be all ISO-latin, but the program that created it seems to sprinkle UTF-8 characters round in it.

    If that's the case, then that program is horribly broken and I would recommend trying to see what you can do to fix that. Anyway, do you have any sample data that shows both UTF-8 and Latin-1 data?

    EF BB BF is the Byte Order Mark encoded as UTF-8; it can be present at the beginning of UTF-8 encoded files. The fact that in your second sample it appears after a series of commas could mean that the program is trying to write a CSV file and used a UTF-8 encode function that adds the BOM to individual fields, or it slurped a file with the wrong encoding and used that as the contents of the field. If this guess is correct, then maybe a solution would be to first parse the CSV file and then individually decode the fields with different encodings, though I would consider that a pretty ugly workaround, plus you'd have to know the encodings (or guess them, which is a workaround in itself).
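
    A minimal sketch of that workaround, assuming the file really is CSV and that a leading EF BB BF on a field marks it as UTF-8 while BOM-less fields are Latin-1 (the file name and both of those assumptions are mine, not confirmed by the data shown so far):

    use strict;
    use warnings;
    use Text::CSV;
    use Encode qw(decode);

    # Read raw bytes so no I/O layer second-guesses the per-field decoding.
    open my $fh, '<:raw', 'export.csv' or die "export.csv: $!";
    my $csv = Text::CSV->new({ binary => 1, auto_diag => 1 });

    while ( my $row = $csv->getline($fh) ) {
        for my $field (@$row) {
            if ( $field =~ s/^\xEF\xBB\xBF// ) {
                # Leading BOM: treat the rest of the field as UTF-8.
                $field = decode('UTF-8', $field);
            }
            else {
                # No BOM: assume plain Latin-1.
                $field = decode('ISO-8859-1', $field);
            }
        }
        # @$row now holds decoded Perl character strings.
    }
    close $fh;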

    Can you tell us more about this program, the file format it is outputting, and give more example data that shows the problem?

    Update just to address the question in the title and node: simply clobbering any non-ASCII characters without understanding the input data is almost never the right solution, because you'll almost certainly also delete important characters. Instead, first fix the encoding problems, and if you then really want to ASCIIfy your (correctly decoded!) Unicode data, you can use e.g. Text::Unidecode.
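
    For instance, a minimal sketch of that order of operations, decode first and ASCIIfy second (the sample bytes here are made up):

    use strict;
    use warnings;
    use Encode qw(decode);
    use Text::Unidecode;

    my $bytes = "Caf\xC3\xA9 953\xE2\x80\x931513";   # UTF-8 bytes: "Café 953–1513"
    my $text  = decode('UTF-8', $bytes);             # decode to characters first...
    print unidecode($text), "\n";                    # ...then ASCIIfy: "Cafe 953-1513"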

      The program is out-of-support and I've used it for years. And it works perfectly... except... as you deduced, it puts that stuff in when it is exporting to a CSV file. I don't know how to upload the broken data. When I open it in my text editor, it has no problem with it, but when I go and save the file the UTF-8 is all still there. I loaded it into Excel, it loaded fine and showed no anomalies, but when I saved it from Excel all the UTF-8 stuff was still there. There's no pattern I can tell for why the byte-order markers are strewn through the file.

      What should I do either to upload something here with example problematic stuff and/or be able just to brute-force fix it?

        The issue with the sample data you posted is that it is entirely ASCII with some BOMs in it, but from your description it sounded like you could have other Latin-1 (or CP1252 or Latin-9) or UTF-8 characters in it, which you don't show.

        What should I do either to upload something here with example problematic stuff

        A hex dump of the raw bytes like you showed above is fine. See also my node here.

        and/or be able just to brute-force fix it?

        Iff your data consists entirely of a single-byte encoding like the ones I named above, and the only UTF-8 characters that appear in it are BOMs, then the regex you showed in the root node may be acceptable. However, I very much expect that if there's a BOM, then other UTF-8 characters can be present, and if those are mixed with single-byte encodings, or you've got double-encoded characters, you'll have a tough time picking that apart. But again, you'd need to show us more representative data.

        Edit: Typo fixes.
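
        If it really is such a mix, one common heuristic, offered as a guess rather than a fix: try a strict UTF-8 decode of each line and fall back to the single-byte encoding only when that fails (CP1252 as the fallback is my assumption):

        use strict;
        use warnings;
        use Encode qw(decode FB_CROAK LEAVE_SRC);

        sub decode_guess {
            my ($bytes) = @_;
            # Strict UTF-8 first: FB_CROAK makes malformed input die.
            my $text = eval { decode('UTF-8', $bytes, FB_CROAK | LEAVE_SRC) };
            # Anything that is not valid UTF-8 is assumed to be CP1252.
            $text = decode('cp1252', $bytes) unless defined $text;
            $text =~ s/\x{FEFF}//g;    # strip BOM/ZWNBSP characters
            return $text;
        }

        Note that this is exactly where the double-encoding caveat bites: double-encoded data still decodes "successfully" as UTF-8, so it slips through undetected.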

Re: getting rid of UTF-8
by kcott (Archbishop) on Nov 25, 2022 at 07:14 UTC

    G'day BernieC,

    Regex issues:

    • Your first regex, s/^\xef\xbb\xbf//g, anchors to the start of the string: later ef bb bf sequences will not be removed.
    • Your second regex, s/\xef\xbb\xbf//, has no /g modifier: only the first ef bb bf sequence will be removed.

    What you need is s/\xef\xbb\xbf//g:

    $ perl -Mstrict -Mwarnings -E '
        my $x = "\x{ef}\x{bb}\x{bf}123,,,\x{ef}\x{bb}\x{bf}456";
        say "Full string:";
        system "echo $x | hexdump -Cv";
        $x =~ s/\xef\xbb\xbf//g;
        say "All BOM sequences removed:";
        system "echo $x | hexdump -Cv";
    '
    Full string:
    00000000  ef bb bf 31 32 33 2c 2c  2c ef bb bf 34 35 36 0a  |...123,,,...456.|
    00000010
    All BOM sequences removed:
    00000000  31 32 33 2c 2c 2c 34 35  36 0a                    |123,,,456.|
    0000000a

    To remove other characters:

    • Non ISO-8859-1: y/\x00-\xff//cd
    • Non 7-bit ASCII: y/\x00-\x7f//cd
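
    Both transliterations operate on a string that has already been decoded to Perl characters; a quick demonstration (the sample string is made up):

    use strict;
    use warnings;
    use open qw(:std :encoding(UTF-8));

    my $s = "caf\x{E9} \x{2014} 123";          # e-acute (Latin-1) plus an em dash (not Latin-1)
    ( my $latin1 = $s ) =~ y/\x00-\xff//cd;    # deletes only the em dash
    ( my $ascii  = $s ) =~ y/\x00-\x7f//cd;    # deletes the e-acute as well
    print "$latin1\n$ascii\n";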

    — Ken

Re: getting rid of UTF-8
by ikegami (Patriarch) on Nov 25, 2022 at 14:09 UTC

    $text =~ s/^\xef\xbb\xbf//g does remove the offending bytes if your string contains the characters you described. (Well, it'll delete the leading sequence, and removing the ^ will have it delete the others too.)

    Since you repeatedly claim it doesn't, your data is different than you describe, and we can't help you until you provide a better description of your data (e.g. the output of sprintf "%vX", $string).
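
    For instance, applied to a slice of the bytes from your second hex dump:

    use strict;
    use warnings;

    my $string = "\xEF\xBB\xBF953-1513";
    printf "%vX\n", $string;    # prints EF.BB.BF.39.35.33.2D.31.35.31.33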

    fyi, EF BB BF is the UTF-8 encoding of U+FEFF, which is the Byte Order Mark if at the start of the file, and the Zero Width No-Break Space elsewhere.

      the Zero Width No-Break Space elsewhere.

      deprecated, use U+2060 instead

        Good to know.

        I don't think it was used as a word joiner. I think the presence of U+FEFF is explained by the concatenation of a BOM-prefixed string to another (the very kind of error that led to U+2060 WORD JOINER being the new ZWNBSP).