richb has asked for the wisdom of the Perl Monks concerning the following question:

I use Spreadsheet::ParseExcel to parse a spreadsheet containing testcases and generate a SQL file (insert into table/fields values...) to populate our testcase database.

Today, I tried to use my script to digest a spreadsheet with a variety of Unicode data: Spanish, French, German, Russian, Arabic, Hebrew, Chinese Simplified, and Chinese Traditional.

The Unicode data does not come through correctly: for example, Fundación and 資產 both end up garbled in the output.

I would bet this is an encoding problem, and further that these different languages use different encodings.

How I tried to fix this:

(1) Created an override of the Spreadsheet::ParseExcel::FmtDefault per What to do when converting Excel-supplied data to Unicode

(2) Opening the spreadsheet with UTF8 encoding

(3) Opening the spreadsheet with UTF16 encoding

(4) Replacing the TextFmt sub in FmtDefault.pm with the "Unicode fix" identified in http://cpanforum.com/threads/1036.

Unfortunately, none of these worked.

Does anyone have any ideas how to solve this (perplexing!) issue? Thanks much.


Re: Handling variety of languages/Unicode characters with Spreadsheet::ParseExcel
by Corion (Patriarch) on Apr 08, 2010 at 21:02 UTC

    When dealing with encodings, you have to control (and check) the whole path your data takes. Most likely, you are not passing the Unicode characters to your database, or your database does not understand the Unicode characters. When retrieving the data from the database, you might retrieve octets that are UTF-8 encoded strings, but you don't decode these octets into proper Unicode characters. Also, when debugging by printing your data to the screen, your terminal might be configured for a different encoding than UTF-8, or you might output your data in a different encoding than UTF-8.
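A minimal sketch of that principle: decode octets into Perl characters as soon as they enter the program, and encode back to octets only at the output boundary. (The byte string and file name here are illustrative, not from the original post.)

```perl
use strict;
use warnings;
use Encode qw(decode);

# Decode incoming octets into Perl characters as soon as they arrive...
my $octets = "Fundaci\xc3\xb3n";          # UTF-8 bytes, e.g. read from a file
my $chars  = decode( 'UTF-8', $octets );  # now a proper character string

printf "length as bytes: %d, as characters: %d\n",
    length($octets), length($chars);      # 10 vs. 9

# ...and encode back to octets only at the very edge, when writing out.
open my $out, '>:encoding(UTF-8)', 'out.sql' or die "out.sql: $!";
print {$out} $chars;
close $out;
```

Inside the program everything is then one uniform representation (Perl character strings), and the encodings only matter at the two ends of the path.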

      Ah, sorry, I wasn't clear.

      The issue isn't at the database. When I edit the .SQL file my Perl script produces, I see the mangled data there.

      I didn't even get to the point of running the SQL script to load the data, after seeing the Unicode characters not making it into the .SQL file properly.

        Then, are you sure that your .SQL file is UTF-8, and that the text editor you're using to view it understands UTF-8? Most likely you'll want to tell Perl to encode output to UTF-8:

        open my $sql_file, '>:encoding(UTF-8)', $sql_name or die "Couldn't create '$sql_name': $!";
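One way to verify the advice above is to write through the encoding layer and then re-read the file as raw bytes, checking that wide characters really landed as UTF-8. This is a sketch with a hypothetical file name and sample Cyrillic text, not the original script:

```perl
use strict;
use warnings;

my $sql_name = 'test.sql';    # hypothetical file name
open my $sql_file, '>:encoding(UTF-8)', $sql_name
    or die "Couldn't create '$sql_name': $!";

# Two Cyrillic characters (U+0424, U+041E) inside a sample INSERT statement.
print {$sql_file} "INSERT INTO testcase (name) VALUES ('\x{0424}\x{041E}');\n";
close $sql_file;

# Re-read as raw bytes to confirm the Cyrillic came out as UTF-8 octets.
open my $check, '<:raw', $sql_name or die "$sql_name: $!";
my $bytes = do { local $/; <$check> };
close $check;

printf "%v02X\n", substr( $bytes, index( $bytes, "'" ) + 1, 4 );
# The two Cyrillic letters should appear as the UTF-8 sequence D0.A4.D0.9E
```

If the hex dump shows single mangled bytes instead of valid UTF-8 sequences, the data was already broken before it reached the output layer.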
Re: Handling variety of languages/Unicode characters with Spreadsheet::ParseExcel
by eff_i_g (Curate) on Apr 08, 2010 at 23:51 UTC

      Thanks for the suggestion.

      I created a small test spreadsheet with two entries:

      Fundacin

      ФОРСУНОК

      The Encoding method returns 1 (8bit ASCII or single byte UTF-16) for the Spanish text and 2 (UTF-16BE) for the Russian text.

      I also modified the TextFmt routine in FmtDefault.pm to print the value of the parameter $sCode. It was undef for the Spanish text and UTF16-BE for the Russian text. So the routine just returns the Spanish text since $sCode is undef, but formats the Russian text (which gets mangled) as UTF16-BE.

sub TextFmt($$;$) {
    my ( $oThis, $sTxt, $sCode ) = @_;

    if ( ( !defined($sCode) ) || ( $sCode eq '_native_' ) ) {
        print STDERR "$sTxt/sCode "
            . ( defined($sCode) ? "is _native_" : "undefined" )
            . " - returning text\n";
        return $sTxt;
    }

    # Handle utf8 strings in newer perls.
    if ( $] >= 5.008 ) {
        require Encode;
        print STDERR "$sTxt/$sCode; returning text with UTF-16BE encoding\n";
        return Encode::decode( "UTF-16BE", $sTxt );
    }

    print STDERR "$sTxt/$sCode; formatting with pack/unpack\n";
    return pack( 'U*', unpack( 'n*', $sTxt ) );
    #return pack( 'C*', unpack( 'n*', $sTxt ) );
}
        The Encoding method returns 1 (8bit ASCII or single byte UTF-16) for the Spanish text

        I don't know where that description is coming from, but it suggests a serious misunderstanding of the terms being used.

You can say "single-byte ASCII" (which is redundant, since ASCII by definition uses only 7 bits), but "8bit ASCII" is strange, because ASCII does not cover the 0x80-0xFF range; people usually speak of "8-bit characters" precisely in contrast to ASCII, meaning characters in that range.

        Saying "single byte UTF16" is simply nonsensical. It's an oxymoron.

Re: Handling variety of languages/Unicode characters with Spreadsheet::ParseExcel
by graff (Chancellor) on Apr 09, 2010 at 01:33 UTC
    You might want to look at this script I posted a while back: xls2tsv

      Thanks for the link!

      I gave that a try and it properly handled the Spanish, French and German text.

      1- Search term = Fundacin

      1- Search term = Franais

      1- Search term = BESCHFTIGTEN

      It had a problem with Russian, Arabic, Hebrew, Chinese Simplified, Chinese Traditional, Korean and Japanese text.

      (Russian) 1 - S e a r c h t e r m = $ !# - should be ФОРСУНОК

      (Arabic) 1 - S e a r c h t e r m = 'D91 - should be العرب(

      (Hebrew)1 - S e a r c h t e r m =  - should be עברית

      (Chinese Simplified)1 - S e a r c h t e r m = DN - should be 资产

      (Chinese Traditional) 1 - S e a r c h t e r m = Œu" - should be 資產

      (Korean) 1 - S e a r c h t e r m = Dx - should be 아미노산

      (Japanese) 1 - S e a r c h t e r m = 00 - should be オレ

        It would seem, then, that the creation of those spreadsheet files involved using non-unicode encodings for the non-Latin-based scripts (which seems perverse, but oh well).

        If you have some sort of independent (and reliable) information (e.g. in another cell of the spreadsheet) to identify the language being used in a given cell, you could try storing the non-Latin-based cell contents (the raw byte sequences from those cells) in separate plain-text files sorted by language, then try Encode::Guess on the Asian language data, and just experiment with alternate single-byte code pages (cp1256 for Arabic, cp1251 for Russian, cp1255 for Hebrew, among others) until you find the right code page for each.

        Once you know which encoding is being used for each of the non-Latin, non-unicode sets in the spreadsheet, just use Encode::decode as appropriate to convert the data to utf8.
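The trial-decoding loop graff describes could look something like the following. The raw byte string is an assumption for illustration ("ФОРСУНОК" as it would appear in cp1251), not data taken from the actual spreadsheet:

```perl
use strict;
use warnings;
use Encode qw(decode);
use Encode::Guess;

# Raw bytes as pulled from a cell; assumed here to be "ФОРСУНОК" in cp1251.
my $raw = "\xD4\xCE\xD0\xD1\xD3\xCD\xCE\xCA";

# Trial-decode against the candidate single-byte code pages mentioned above.
for my $cp (qw(cp1251 cp1255 cp1256)) {
    my $text = decode( $cp, $raw );
    print "$cp: $text\n";    # eyeball which one yields a readable script
}

# Encode::Guess can narrow things down when handed a list of suspects,
# though ambiguous single-byte data often cannot be resolved automatically.
my $decoder = Encode::Guess::guess_encoding( $raw, qw(cp1251 cp1255 cp1256) );
print ref($decoder)
    ? "guessed: " . $decoder->name . "\n"
    : "no unambiguous guess: $decoder\n";
```

Since almost any byte sequence is "valid" in a single-byte code page, the eyeball check against known-good text is usually the deciding step; the guess is only a shortcut.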

        I would also point out that your method of showing what you actually got is probably misleading -- some bytes in those goofy strings might not be displayable, and some might be getting "interpreted" by your display tool. It would be better to look at the hexadecimal values of the bytes; e.g. if one of those strings is in $_, you can do:

        { use bytes; print join(" ",map{unpack("H*",$_)} split//) }
        (NB: the "use bytes" is there to make sure that split treats $_ as "raw bytes", no matter what, so that you get to see what is really in those cells.)
Re: Handling variety of languages/Unicode characters with Spreadsheet::ParseExcel
by jmcnamara (Monsignor) on Apr 10, 2010 at 00:26 UTC

    I am the current maintainer of Spreadsheet::ParseExcel. Try the FmtJapan formatter in parse() as follows:

#!/usr/bin/perl -w
use strict;

use Spreadsheet::ParseExcel;
use Spreadsheet::ParseExcel::FmtJapan;

my $parser    = Spreadsheet::ParseExcel->new();
my $formatter = Spreadsheet::ParseExcel::FmtJapan->new();

my $workbook = $parser->parse( 'file.xls', $formatter );

if ( !defined $workbook ) {
    die $parser->error(), ".\n";
}

for my $worksheet ( $workbook->worksheets() ) {

    print "Worksheet name: ", $worksheet->get_name(), "\n\n";

    my ( $row_min, $row_max ) = $worksheet->row_range();
    my ( $col_min, $col_max ) = $worksheet->col_range();

    for my $row ( $row_min .. $row_max ) {
        for my $col ( $col_min .. $col_max ) {

            my $cell = $worksheet->get_cell( $row, $col );
            next unless $cell;

            # Do something with $cell->value() and remember to encode
            # any output streams if required.
        }
    }
}

__END__

Ignore the fact that it says FmtJapan; it is a general-purpose Unicode formatter as well.

    --
    John.