in reply to Re^3: CSV nightmare (utf8 w/ csv_xs)
in thread CSV nightmare

Text::CSV_XS doesn't do anything with encodings internally, and reads bytes, which is also the reason why EBCDIC is so damn hard to implement in the current structure.

That said, the user is responsible for the encoding/decoding of the data/fields, as CSV files have no way of telling that to the parser.
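For illustration only (this sketch is mine, not part of the original post): on the reading side that typically means decoding each field yourself after getline returns it. The file name and the assumption that the data is UTF-8 encoded are placeholders.

use strict;
use warnings;
use Text::CSV_XS;
use Encode qw( decode );

my $csv = Text::CSV_XS->new ({ binary => 1 });

binmode STDOUT, ":encoding(UTF-8)";

# Read raw bytes; the parser never looks at encodings, so decode the
# fields yourself once they come back.
open my $fh, "<:raw", "data.csv" or die "data.csv: $!";
while (my $row = $csv->getline ($fh)) {
    $_ = decode ("UTF-8", $_) for @$row;   # bytes -> Perl character strings
    print $row->[0], "\n";
    }
close $fh;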

_utf8_on ($_) is NOT the way to go. Please read Unicode advice and the Perl Unicode tutorial for the reasons.

while (my $row = $csv->getline ($fh)) {
    # Fix inability of CSV_XS to handle UTF8 strings.
    utf8::decode ($_) for @$row;
    print $row->[4];
    }

As a proof of concept, I tried something simpler in the example below:

#!/pro/bin/perl

use strict;
use warnings;

use Text::CSV_XS;
use Encode qw( encode decode );

my $csv  = Text::CSV_XS->new ({ binary => 1 });
my $file = "test.csv";

open my $fh, ">:encoding(utf-16)", $file or die "$file: $!";
print $fh join ",",
    "\x{0073}e\x{00f1}\x{00f3}\x{0159}", 123,
    "\x{00c5}\x{0142}\x{00e9}\x{0161}\x{0171}\x{0146}", "\r\n";
close $fh;

binmode STDOUT, ":utf8";

open $fh, "<:raw:encoding(utf-16)", $file or die "$file: $!";
while (my $row = $csv->getline ($fh)) {
    print join "," => @$row, "\n";
    utf8::decode ($_) for @$row;
    print join "," => @$row, "\n";
    }

To show that test.csv now has a BOM:

$ od -t x1 test.csv
0000000 fe ff 00 73 00 65 00 f1 00 f3 01 59 00 2c 00 31
0000020 00 32 00 33 00 2c 00 c5 01 42 00 e9 01 61 01 71
0000040 01 46 00 2c 00 0d 00 0a
0000050

And the script output was:

señóÅ,123,ÃÅéšűÅ,,
señóř,123,Åłéšűņ,,

The second line was exactly what I was expecting.


Enjoy, Have FUN! H.Merijn

Re^5: CSV nightmare (utf8 w/ csv_xs)
by ikegami (Patriarch) on Jun 03, 2008 at 19:26 UTC

    I see you reverted the IO layers, introducing an error.

    >perl 689902.pl
    UTF-16:Partial character at c:/Progs/perl588/lib/IO/Handle.pm line 413.
    UTF-16:Partial character at c:/Progs/perl588/lib/IO/Handle.pm line 413.

    >debug test.csv -rcx
    CX 0027
    :
    -d100 l27
    0B04:0100  FE FF 00 73 00 65 00 F1-00 F3 01 59 00 2C 00 31   ...s.e.....Y.,.1
    0B04:0110  00 32 00 33 00 2C 00 C5-01 42 00 E9 01 61 01 71   .2.3.,...B...a.q
    0B04:0120  01 46 00 0D 00 0D 0A                              .F.....
    -q

    Note the last five bytes. You need either
        :raw:encoding(utf-16) with "\r\n"
    or
        :raw:encoding(utf-16):crlf:utf8 with "\n"
    for *both* input and output.
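    A minimal sketch of those two combinations on the output side (the file name and field values are placeholders of mine; the layer stacks themselves are the ones named above):

        use strict;
        use warnings;

        my $file   = "test.csv";
        my @fields = ("\x{0073}e\x{00f1}\x{00f3}\x{0159}", 123);

        # Variant 1: no :crlf layer, so the code prints the "\r\n" itself
        open my $out1, ">:raw:encoding(utf-16)", $file or die "$file: $!";
        print $out1 join (",", @fields), "\r\n";
        close $out1;

        # Variant 2: the :crlf layer does the "\n" <-> "\r\n" translation
        open my $out2, ">:raw:encoding(utf-16):crlf:utf8", $file or die "$file: $!";
        print $out2 join (",", @fields), "\n";
        close $out2;

        # The matching layer stack must also be used when reading the file back.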

    Also, the byte order is backwards (it should be UTF-16le), but that doesn't matter for this discussion.

    _utf8_on ($_) is NOT the way to go.

    I used it without thinking because it's the complement of the (implicit) _utf8_off your module does, but utf8::decode is indeed better.

    That said, the user is responsible for the encoding/decoding of the data/fields, as CSV files have no way of telling that to the parser.

    The data is already decoded by :encoding before being passed to the parser. The parser is told this via the UTF8 flag, but the parser simply doesn't check that flag (the csv->bptr = SvPV (csv->tmp, csv->size) call has no corresponding SvUTF8 (csv->tmp) check).
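    To see the first half of that for yourself (a small check I am adding here, not part of the original post; it reuses the test.csv written by the script above):

        use strict;
        use warnings;

        my $file = "test.csv";
        open my $fh, "<:raw:encoding(utf-16)", $file or die "$file: $!";
        my $line = <$fh>;
        close $fh;

        # The :encoding layer has already turned the bytes into characters;
        # the scalar carries the internal UTF8 flag, which is exactly the
        # flag the XS parser does not look at.
        print utf8::is_utf8 ($line) ? "decoded (UTF8 flag set)\n" : "raw bytes\n";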

    The problem can be solved by simply dealing only with UTF-8. Use utf8::upgrade on all strings coming into the parser, and use utf8::decode on all strings coming out. The catch with that naïve method is the performance cost of dealing with UTF-8 even when it's not necessary.
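    As a rough user-level sketch of that upgrade-in / decode-out idea (the wrapper names combine_utf8 and parse_utf8 are invented here for illustration; the post is talking about doing the equivalent inside the module):

        use strict;
        use warnings;
        use Text::CSV_XS;

        my $csv = Text::CSV_XS->new ({ binary => 1 });

        # Going in: force every field into the internal UTF-8 representation
        sub combine_utf8 {
            my @fields = @_;
            utf8::upgrade ($_) for @fields;
            $csv->combine (@fields) or die "combine failed";
            return $csv->string;
            }

        # Coming out: turn the parser's byte-oriented result back into characters
        sub parse_utf8 {
            my ($line) = @_;
            $csv->parse ($line) or die "parse failed";
            my @fields = $csv->fields;
            utf8::decode ($_) for @fields;
            return @fields;
            }

        binmode STDOUT, ":encoding(UTF-8)";
        my $line = combine_utf8 ("se\x{00f1}\x{00f3}\x{0159}", 123);
        print join ("," => parse_utf8 ($line)), "\n";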

Re^5: CSV nightmare
by ikegami (Patriarch) on Jun 04, 2008 at 23:19 UTC

    Wow! I was going to write a .t file for your module, but it seems you already added UTF8 support!

    We probably need many more tests to check whether all edge cases are covered. See t/50_utf8.t.

    I'll do that for you if you want.
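    For the flavour of it, such a test could start out along these lines (a sketch of my own, not taken from t/50_utf8.t; the sample string is made up):

        use strict;
        use warnings;
        use Test::More tests => 2;
        use Text::CSV_XS;

        my $csv = Text::CSV_XS->new ({ binary => 1 });
        my $str = "se\x{00f1}\x{00f3}\x{0159}";   # field with non-ASCII characters

        ok ($csv->combine ($str, 123),  "combine a UTF8 field");
        ok ($csv->parse ($csv->string), "parse the combined line back");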

      Yes, please :)

      The module still builds under 5.005, but the utf8 issues are only tested under 5.8 and up, so you can use any 5.8 feature you like.

      Latest code always available here.


      Enjoy, Have FUN! H.Merijn