in reply to Re: Handling variety of languages/Unicode characters with Spreadsheet::ParseExcel
in thread Handling variety of languages/Unicode characters with Spreadsheet::ParseExcel

Thanks for the link!

I gave that a try and it properly handled the Spanish, French and German text.

1- Search term = Fundación

1- Search term = Français

1- Search term = BESCHÄFTIGTEN

It had a problem with Russian, Arabic, Hebrew, Chinese Simplified, Chinese Traditional, Korean and Japanese text.

(Russian) 1 - S e a r c h t e r m = $ !# - should be ФОРСУНОК

(Arabic) 1 - S e a r c h t e r m = 'D91 - should be العرب(

(Hebrew) 1 - S e a r c h t e r m =  - should be עברית

(Chinese Simplified) 1 - S e a r c h t e r m = DN - should be 资产

(Chinese Traditional) 1 - S e a r c h t e r m = Œu" - should be 資產

(Korean) 1 - S e a r c h t e r m = Dx - should be 아미노산

(Japanese) 1 - S e a r c h t e r m = 00 - should be オレ

Re^3: Handling variety of languages/Unicode characters with Spreadsheet::ParseExcel
by graff (Chancellor) on Apr 09, 2010 at 13:17 UTC
    It would seem, then, that the creation of those spreadsheet files involved using non-Unicode encodings for the non-Latin scripts (which seems perverse, but oh well).

    If you have some independent (and reliable) way to identify the language used in a given cell (e.g. from another cell of the spreadsheet), you could try storing the non-Latin cell contents (the raw byte sequences from those cells) in separate plain-text files sorted by language. Then try Encode::Guess on the Asian-language data, and experiment with alternate single-byte code pages (cp1256 for Arabic, cp1251 for Russian, cp1255 for Hebrew, among others) until you find the right code page for each.
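
    A minimal sketch of the Encode::Guess step, assuming $raw holds the raw byte string pulled from one cell and that the suspect list matches the CJK languages you actually have:

    use strict;
    use warnings;
    use Encode::Guess;

    # Slurp the raw bytes of one cell dump (hypothetical input source).
    my $raw = do { local $/; <STDIN> };

    # guess_encoding() returns an Encode object on success, or a plain
    # error string (e.g. "encodings too ambiguous") on failure.
    my $enc = guess_encoding($raw, qw(euc-cn big5 euc-kr shiftjis euc-jp));
    ref($enc) or die "Can't guess encoding: $enc\n";

    binmode STDOUT, ':encoding(UTF-8)';
    print "Guessed: ", $enc->name, "\n";
    print $enc->decode($raw), "\n";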

    Once you know which encoding is being used for each of the non-Latin, non-Unicode sets in the spreadsheet, just use Encode::decode as appropriate to convert the data to utf8.
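
    For the single-byte experiments, something along these lines lets you eyeball each candidate code page (again just a sketch; $raw is a hypothetical variable holding one cell's bytes):

    use strict;
    use warnings;
    use Encode qw(decode);

    my $raw = do { local $/; <STDIN> };   # raw bytes from one cell

    # Decode the same bytes under each candidate code page; whichever
    # line comes out as readable text names the right encoding.
    binmode STDOUT, ':encoding(UTF-8)';
    for my $cp (qw(cp1251 cp1255 cp1256)) {
        print "$cp: ", decode($cp, $raw), "\n";
    }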

    I would also point out that your method of showing what you actually got is probably misleading -- some bytes in those goofy strings might not be displayable, and some might be getting "interpreted" by your display tool. It would be better to look at the hexadecimal values of the bytes; e.g. if one of those strings is in $_, you can do:

    { use bytes; print join(" ", map { unpack("H*", $_) } split //) }
    (NB: the "use bytes" is there to make sure that split treats $_ as "raw bytes", no matter what, so that you get to see what is really in those cells.)
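
    The same thing works as a command-line filter over a file of raw cell dumps (file name hypothetical; -Mbytes plays the role of "use bytes" here):

    perl -Mbytes -ne 'chomp; print join(" ", map { unpack("H*", $_) } split //), "\n"' cell_dump.txt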

      Thanks for your reply. I think your comments summarize the situation/problem nicely.

      Frankly, the source spreadsheet is a mess! :)

      I like your approach of segregating the non-Latin data and using Encode::Guess. I will experiment with that and see if I can make things better.

      In the meantime, I'll have to tell the folks creating these spreadsheets not to do what was done here.

      Thanks again for your help!

        Did you find a solution yet? I have a spreadsheet that contains English and Chinese which I am trying to import into MySQL. It also has a column of MS specialized characters (= < >) that is driving me nuts. I'm getting nowhere with all the suggestions in this posting. I'm not exactly a noob with Perl, but I have never worked with internationalization and encoding.