in reply to Handling malformed UTF-16 data with PerlIO layer

Why don't you fix the bad files instead of having your program handle them?
#!/usr/bin/perl
# usage:
#   rem_surrogate.pl < infile > outfile

use strict;
use warnings;

binmode STDIN;   # Disable :crlf
binmode STDOUT;  # Disable :crlf

my $read_size = 16*1024;

my $buf = '';
for (;;) {
    my $rv = read(STDIN, $buf, $read_size, length($buf));
    die("$!\n") if !defined($rv);
    last if !$rv;

    # Work on an even number of bytes; a trailing odd byte stays in
    # $buf for the next read.
    $_ = substr($buf, 0, int(length($buf)/2)*2, '');

    # Replace every code unit whose high byte is D8-DF (any surrogate,
    # paired or not) with U+FFFD, in little-endian byte order.
    s/\G(.)(?:[\xD8-\xDF]|(.))/ defined($2) ? $1.$2 : "\xFD\xFF" /esg;

    print;
}

# A leftover odd byte can't be a complete code unit.
print("\xFD\xFF") if length($buf);
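
Once the files are cleaned up like this, reading them back through the PerlIO encoding layer (the approach the parent node asks about) should then work without incident. A minimal, untested sketch: 'outfile' is just the name from the usage comment above, the per-line length print is placeholder processing, and the :raw/:crlf layering only matters on platforms where :crlf is the default.

#!/usr/bin/perl
use strict;
use warnings;

# Open the cleaned file with the UTF-16LE layer.  ':raw' strips the
# default :crlf layer first; the outer ':crlf' then translates CRLF
# in the decoded character stream.
open(my $fh, '<:raw:encoding(UTF-16LE):crlf', 'outfile')
    or die("Can't open outfile: $!\n");

while (my $line = <$fh>) {
    chomp($line);
    printf("%d chars\n", length($line));   # placeholder processing
}

close($fh);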

Re^2: Handling malformed UTF-16 data with PerlIO layer
by almut (Canon) on Oct 27, 2008 at 22:38 UTC
    Why don't you fix the bad files instead of having your program handle them?

    ...mostly because I'd rather avoid having to get down to the encoding nitty-gritties, if there is some 'proper' way of doing it with Perl's built-in encoding support.  For example, the ad-hoc approach you've shown would also replace valid surrogate pairs, which I'd rather keep, if possible (just in case). Sure, the regex could presumably be fixed to handle this (using lookahead), but this would be kind of reinventing the wheel...  OTOH, it looks like the best workaround for the issue so far — So, thanks!
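
    For illustration, a pair-aware variant of that substitution might look like the sketch below: an untested drop-in for the s/// line in rem_surrogate.pl, reusing its UTF-16LE byte patterns and U+FFFD replacement. As the reply below points out, it would still clobber a valid pair that happens to straddle two reads.

    s{ \G (?:
             ( .[^\xD8-\xDF]                 # ordinary code unit
             | .[\xD8-\xDB] .[\xDC-\xDF]     # valid hi+lo surrogate pair
             )
           | ..                              # surrogate not part of a valid pair
         )
     }{ defined($1) ? $1 : "\xFD\xFF" }egsx;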

      Lookahead alone won't do because the pair might be cut into two reads. It does make things more complicated.

      I don't know anything about surrogates. I assumed the following:

      • hi followed by lo = ok
      • hi not followed by lo = bad
      • lo not preceded by hi = bad
      #!/usr/bin/perl
      # usage:
      #   fix_surrogates.pl < infile > outfile

      # Hi Surrogate: D800-DBFF
      # Lo Surrogate: DC00-DFFF

      use strict;
      use warnings;

      binmode STDIN;   # Disable :crlf
      binmode STDOUT;  # Disable :crlf

      my $read_size = 16*1024;

      # A non-surrogate code unit, or a hi surrogate followed by a lo.
      my $valid_pat = qr/
          .[^\xD8-\xDF]
        | .[\xD8-\xDB].[\xDC-\xDF]
      /xs;

      # A lone lo surrogate, or a hi surrogate known not to be followed by a lo.
      my $invalid_pat = qr/
          .[\xDC-\xDF]
        | .[\xD8-\xDB](?=.[^\xDC-\xDF])
      /xs;

      my $ibuf = '';
      my $obuf = '';
      for (;;) {
          my $rv = read(STDIN, $ibuf, $read_size, length($ibuf));
          die("$!\n") if !defined($rv);
          last if !$rv;

          for ($ibuf) {
              /\G ($valid_pat+) /xgc && do { $obuf .= $1; };
              /\G $invalid_pat /xgc && do { $obuf .= "\xFD\xFF"; redo };
          }

          print($obuf);
          # Keep the undecided tail (e.g. a hi surrogate at the very end of
          # the buffer) for the next read.
          $ibuf = substr($ibuf, pos($ibuf)||0);
          $obuf = '';
      }

      # Whatever remains at EOF can no longer form valid characters.
      $ibuf =~ s/..?/\xFD\xFF/sg;
      print($ibuf);

      Update: Tested. Fixed character class that wasn't negated as it should have been.

      >type testdata.pl
      binmode STDOUT;
      my $hi = "\xF4\xDB";
      my $lo = "\xE2\xDE";
      print
          "a\0" . $hi . $lo . "b\0" . "\n\0",
          "c\0" . $hi . "c\0" . "d\0" . "\n\0",
          "e\0" . $lo . "f\0" . "g\0" . "\n\0";

      >perl testdata.pl | perl fix_surrogates.pl | perl -0777 -pe"BEGIN { binmode STDIN, ':encoding(UTF-16le)'; binmode STDOUT, ':encoding(US-ASCII)' }"
      "\x{10d2e2}" does not map to ascii, <> chunk 1.
      "\x{fffd}" does not map to ascii, <> chunk 1.
      "\x{fffd}" does not map to ascii, <> chunk 1.
      a\x{10d2e2}b
      c\x{fffd}cd
      e\x{fffd}fg

        Thank you very much, again, for actually working out the details. I think I'll go with that approach — unless someone has a better suggestion...

        That said, my gut feeling of unease about reimplementing a parser for an encoding I may not have fully understood still holds (e.g. what are private-use high-surrogates, really? ...and who knows what else there might be).