in reply to Re^2: Handling variety of languages/Unicode characters with Spreadsheet::ParseExcel
in thread Handling variety of languages/Unicode characters with Spreadsheet::ParseExcel
If you have some sort of independent (and reliable) information to identify the language used in a given cell (e.g. a language code in another cell of the spreadsheet), you could store the non-Latin cell contents (the raw byte sequences from those cells) in separate plain-text files, sorted by language. Then try Encode::Guess on the Asian-language data, and experiment with the likely single-byte code pages (cp1256 for Arabic, cp1251 for Russian, cp1255 for Hebrew, among others) until you find the right code page for each file.
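A minimal sketch of the Encode::Guess step, assuming the Asian-language cells might be euc-jp, shiftjis, or big5-eten (that suspect list is an assumption; adjust it to the languages you actually expect):

```perl
use strict;
use warnings;
use Encode ();
use Encode::Guess qw(euc-jp shiftjis big5-eten);  # suspect list is an assumption

# Try to identify the encoding of one cell's raw bytes.
# Returns the decoded (internal Unicode) string, or undef when the
# guess fails or is ambiguous.
sub guess_and_decode {
    my ($bytes) = @_;
    my $decoder = Encode::Guess->guess($bytes);
    return undef unless ref $decoder;   # on failure, $decoder is a diagnostic string
    return $decoder->decode($bytes);
}
```

Note that Encode::Guess refuses to choose when the bytes are valid in more than one suspect encoding (common with short shiftjis/euc-jp strings), which is why the single-byte code pages still need to be settled by experiment.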
Once you know which encoding is being used for each of the non-Latin, non-Unicode sets in the spreadsheet, just use Encode::decode as appropriate to convert the data to Perl's internal (utf8) strings.
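For example, once a cell is known to be cp1251 (Russian), the conversion is a single decode call; the raw bytes below are a made-up cp1251 sample, not data from the original spreadsheet:

```perl
use strict;
use warnings;
use Encode qw(decode encode);

# Hypothetical raw bytes from a cp1251 cell: 0xC4 0xE0 is "Da" in Cyrillic.
my $raw  = "\xC4\xE0";
my $text = decode('cp1251', $raw);    # Perl internal Unicode string
my $out  = encode('UTF-8', $text);    # utf8 byte string, ready to write out

printf "U+%04X U+%04X\n", map { ord } split //, $text;   # prints U+0414 U+0430
```

The decode step gives you real Unicode characters to work with; only re-encode to UTF-8 bytes when writing the data back out.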
I would also point out that your method of showing what you actually got is probably misleading: some bytes in those goofy strings may not be displayable, and others may be getting "interpreted" by your display tool. It would be better to look at the hexadecimal values of the bytes; e.g., if one of those strings is in $_, you can do:
    { use bytes; print join(" ", map { unpack("H*", $_) } split //), "\n" }

(NB: the "use bytes" is there to make sure that split treats $_ as "raw bytes", no matter what, so that you get to see what is really in those cells.)
Replies are listed 'Best First'.
Re^4: Handling variety of languages/Unicode characters with Spreadsheet::ParseExcel
by richb (Scribe) on Apr 09, 2010 at 14:40 UTC
by mjoinsd (Initiate) on Jul 14, 2010 at 21:58 UTC