in reply to Re: CSV nightmare
in thread CSV nightmare

Text::CSV might soon be extended with a layer that deals with encodings, somewhat like this:

use Text::CSV::Encoded;
my $csv = Text::CSV::Encoded->new ({
    encoding     => "utf-8",    # Both in and out
    encoding_in  => "utf-16le", # Only the input
    encoding_out => "cp1252",   # Only the output
    });

Until then, I think

binmode STDOUT, ":utf8";
my $csv = Text::CSV_XS->new ({ binary => 1 });
open my $fh, "<:encoding(utf-16le)", $file or die "$file: $!";
while (my $row = $csv->getline ($fh)) {
    print $row->[4];
    }

should work


Enjoy, Have FUN! H.Merijn

Replies are listed 'Best First'.
Re^3: CSV nightmare (utf8 w/ csv_xs)
by ikegami (Patriarch) on Jun 03, 2008 at 10:36 UTC

    It's already been covered that it should be

    open my $fh, "<:raw:encoding(utf-16):crlf:utf8", $file or die "$file: $!";

    or more precisely,

    open my $fh, "<:raw:encoding(ucs-2le):crlf:utf8", $file or die "$file: $!";
    read($fh, my $bom = '', 1);

    And no, it doesn't work. Not if the data contains any non-ASCII characters, at least, but that's the whole point of this exercise. The UTF8 flag gets turned off, so the UTF-8 encoding of the characters is treated as iso-latin-1.

    For example, if a field contains <"é">, Text::CSV_XS returns the two characters <Ã©> instead of <é>. (I'm using angled brackets to quote to avoid confusion with the double-quotes in the CSV file.)

    Similarly, if a field contains <"♠"> (U+2660, whose UTF-8 encoding is the three bytes 0xE2 0x99 0xA0), Text::CSV_XS returns three latin-1 characters instead of <♠>.
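    The latin-1 misreading can be reproduced with Encode alone; this is a sketch independent of Text::CSV_XS, showing how the UTF-8 bytes of one character come back as two characters when reinterpreted as iso-latin-1:

```perl
use strict;
use warnings;
use Encode qw( encode decode );

my $char  = "\x{00e9}";                    # é, a single character
my $bytes = encode("UTF-8", $char);        # the two bytes 0xC3 0xA9
my $wrong = decode("iso-8859-1", $bytes);  # what a flag-ignoring parser yields:
                                           # the two characters <Ã©>
```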

    The flag needs to be reinstated, so it should be:

    use Encode qw( _utf8_on );

    my $csv = Text::CSV_XS->new ({ binary => 1 });

    # UTF-16 or UCS-2 file with BOM and CRLF or LF line endings.
    open my $fh, "<:raw:encoding(utf-16):crlf:utf8", $file or die "$file: $!";
    while (my $row = $csv->getline ($fh)) {
        # Fix inability of CSV_XS to handle UTF8 strings.
        _utf8_on($_) for @$row;
        print $row->[4];
        }

    There is at least one other problem with treating characters encoded using UTF-8 no differently than characters encoded using iso-latin-1, as Text::CSV_XS does.

    If any of eol, sep_char, etc. is passed a string with the UTF8 flag off that contains a character in [\x80-\xFF], Text::CSV_XS can generate false positives. However, this is unlikely to ever happen.

    Text::CSV might soon be extended with a layer that deals with encodings

    I don't see the point, since Text::CSV doesn't open any file handles. All it needs to do is respect the UTF8 flag on strings it receives via getline, eol, sep_char, etc. Currently (well, 0.34 and presumably 0.45), it ignores it.
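    For reference, the flag in question can be inspected with utf8::is_utf8. A small sketch (the string literals here are chosen for illustration):

```perl
use strict;
use warnings;

my $latin1 = "caf\x{e9}";          # all codepoints <= 0xFF: flag stays off
my $wide   = "caf\x{e9}\x{2660}";  # U+2660 forces the internal UTF-8 form

# utf8::is_utf8 reports the internal flag,
# not whether the text "is valid UTF-8".
my $flag_off = utf8::is_utf8($latin1) ? 1 : 0;
my $flag_on  = utf8::is_utf8($wide)   ? 1 : 0;
```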

      Text::CSV_XS doesn't do anything with encodings internally, and reads bytes, which is also the reason why EBCDIC is so damn hard to implement in the current structure.

      That said, the user is responsible for the encoding/decoding of the data/fields, as CSV files have no way of telling that to the parser.

      _utf8_on ($_) is NOT the way to go. Please read Unicode advice and the Perl Unicode tutorial for the reasons.

      while (my $row = $csv->getline ($fh)) {
          # Fix inability of CSV_XS to handle UTF8 strings.
          utf8::decode ($_) for @$row;
          print $row->[4];
          }

      As a proof of concept, I tried something simpler in the example below:

      #!/pro/bin/perl

      use strict;
      use warnings;

      use Text::CSV_XS;
      use Encode qw( encode decode );

      my $csv = Text::CSV_XS->new ({ binary => 1 });

      my $file = "test.csv";
      open my $fh, ">:encoding(utf-16)", $file or die "$file: $!";
      print $fh join ",",
          "\x{0073}e\x{00f1}\x{00f3}\x{0159}", 123,
          "\x{00c5}\x{0142}\x{00e9}\x{0161}\x{0171}\x{0146}", "\r\n";
      close $fh;

      binmode STDOUT, ":utf8";
      open $fh, "<:raw:encoding(utf-16)", $file or die "$file: $!";
      while (my $row = $csv->getline ($fh)) {
          print join "," => @$row, "\n";
          utf8::decode ($_) for @$row;
          print join "," => @$row, "\n";
          }

      To show that test.csv now has a BOM:

      $ od -t x1 test.csv
      0000000 fe ff 00 73 00 65 00 f1 00 f3 01 59 00 2c 00 31
      0000020 00 32 00 33 00 2c 00 c5 01 42 00 e9 01 61 01 71
      0000040 01 46 00 2c 00 0d 00 0a
      0000050

      And the script output was:

      seÃ±Ã³Å™,123,Ã…Å‚Ã©Å¡Å±Å†,,
      señóř,123,Åłéšűņ,,
      

      The second line was exactly what I was expecting.


      Enjoy, Have FUN! H.Merijn

        I see you reverted the IO layers, introducing an error.

        >perl 689902.pl
        UTF-16:Partial character at c:/Progs/perl588/lib/IO/Handle.pm line 413.
        UTF-16:Partial character at c:/Progs/perl588/lib/IO/Handle.pm line 413.

        >debug test.csv
        -rcx
        CX 0027
        :
        -d100 l27
        0B04:0100  FE FF 00 73 00 65 00 F1-00 F3 01 59 00 2C 00 31   ...s.e.....Y.,.1
        0B04:0110  00 32 00 33 00 2C 00 C5-01 42 00 E9 01 61 01 71   .2.3.,...B...a.q
        0B04:0120  01 46 00 0D 00 0D 0A                              .F.....
        -q

        Note the last five bytes. You need either
        :raw:encoding(utf-16) with \r\n
        or
        :raw:encoding(utf-16):crlf:utf8 with \n
        for *both* input and output.
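        To illustrate the pairing, here is a sketch of a round trip with *matching* layers on both sides (written to a temporary file; the field content is made up for the demo):

```perl
use strict;
use warnings;
use File::Temp qw( tempfile );

my ($out, $file) = tempfile();
close $out;

# Write with :crlf under :encoding, so the \r is added before encoding.
open $out, ">:raw:encoding(utf-16):crlf", $file or die "$file: $!";
print $out "a,b\n";
close $out;

# Read back with the same layer stack; the \r\n collapses to \n again.
open my $in, "<:raw:encoding(utf-16):crlf", $file or die "$file: $!";
my $line = <$in>;
close $in;
unlink $file;
```

        A mismatched stack, by contrast, leaves a stray \r (or stray NUL bytes) in the data.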

        Also, the byte order is backwards (it should be UTF-16le), but that doesn't matter for this discussion.

        _utf8_on ($_) is NOT the way to go.

        I used it without thinking because it's the complement of the (implicit) _utf8_off your module does, but utf8::decode is indeed better.

        That said, the user is resonsible for the encoding/decoding of the data/fields, as CSV files have no way of telling that to the parser.

        The data is already decoded by :encoding before being passed to the parser. The parser is told this via the UTF8 flag, but the parser simply doesn't check that flag (csv->bptr = SvPV (csv->tmp, csv->size) has no corresponding SvUTF8 (csv->tmp) check).

        The problem can be solved by simply dealing only with UTF-8. Use utf8::upgrade on all strings coming into the parser, and use utf8::decode on all strings coming out. The catch with that naïve method is the performance cost of dealing with UTF-8 even when it's not necessary.
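        A sketch of that naive round trip, where Encode::encode stands in for a byte-oriented parser that reads the internal UTF-8 bytes while ignoring the flag:

```perl
use strict;
use warnings;
use Encode qw( encode );

my $in = "na\x{ef}ve";           # "naïve"; may arrive with the UTF8 flag off
utf8::upgrade($in);              # force the internal UTF-8 representation

# A byte-oriented parser effectively hands back the internal UTF-8 bytes:
my $out = encode("UTF-8", $in);  # six bytes, flag off

utf8::decode($out);              # restore the five characters on the way out
```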

        Wow! I was going to write a .t file for your module, but it seems you already added UTF8 support!

        We probably need many more tests to check if all edge-cases are covered. See t/50_utf8.t.

        I'll do that for you if you want.