Re: iso-8859-1 code converter
by Corion (Patriarch) on May 05, 2009 at 06:24 UTC
Re: iso-8859-1 code converter
by rovf (Priest) on May 05, 2009 at 07:58 UTC
I had expected that such a conversion table could be downloaded from one of the
Unicode sites on the Web, but if it is too difficult to find, it can be produced
in your case without too much difficulty.
Since you want to map iso-8859-1, the only interesting characters for the conversion
table are those with an encoding between 160 (=128+32) and 255, so as a first step,
you write a simple program which produces a file of bytes with values 160, 161, ..., 255.
In a next step, you use a text editor which can convert to and from UTF-8. Since you
are working with Japanese characters, you likely have such an editor anyway. Otherwise
there are plenty of free ones for the usual operating systems. On Windows, for instance,
I use the Unicode version of Michael Zacharov's EC Editor (http://www.econtrol.ru/).
I think jEdit (http://www.jedit.org/), which is available for Windows, Unix and MacOS,
would do as well. Using such an editor, you load (or paste) your byte string and
save it as UTF-8. Using a hex editor, you can then see the UTF-8 encoding of each of these characters.
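A rough sketch of that first step (the file name is just a placeholder; the editor and hex-viewer steps are as described above):

use strict;
use warnings;

# Write the single bytes 160..255 to a file. Load this file into a
# UTF-8-capable editor, re-save it as UTF-8, and inspect the result in a
# hex editor to see how each iso-8859-1 character is encoded in UTF-8.
open my $fh, '>:raw', 'latin1-bytes.txt' or die "Cannot open file: $!";
print {$fh} map { chr } 160 .. 255;
close $fh;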
--
Ronald Fischer <ynnor@mm.st>
Re: iso-8859-1 code converter
by shmem (Chancellor) on May 05, 2009 at 09:51 UTC
You can still get at kanjiworld.com via the wayback machine, but alas, only a few pages are archived. Still, a complete EUC-JP table is on the internet.
That said, it is not clear to me what you are trying to do, since there is no equivalent in iso-8859-1 for EUC-JP (in bytes, two bytes are just two bytes). There's only a Unicode equivalent, which is straightforward to get from the table you refer to. E.g. の as an HTML entity is written as &#12398;; converting the numeric part to hex you get the Unicode code point: printf "%x", 12398 gives 306e. The Perl Unicode representation as a string would be "\x{306e}", which is stored internally (as UTF-8) as a sequence of three byte values: 227, 129, 174:
use HTML::Entities;
use Devel::Peek;
$c = decode_entities("&#12398;");   # の as an HTML entity
Dump $c;
__END__
SV = PV(0x91ecb00) at 0x91ebcdc
REFCNT = 1
FLAGS = (POK,pPOK,UTF8)
PV = 0x91ff608 "\343\201\256"\0 [UTF8 "\x{306e}"]
CUR = 3
LEN = 8
The EUC-JP encoding for の is the two-byte sequence a4ce (164, 206 in decimal), but how is that iso-8859-1? On my terminal those bytes render as the Euro sign plus a capital I with circumflex. So what is the sought outcome? What are you actually trying to do?
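For illustration, a tiny sketch of where those two bytes actually land when decoded - a code point well outside iso-8859-1:

use strict;
use warnings;
use Encode qw(decode);

my $bytes = "\xa4\xce";               # EUC-JP bytes for の
my $char  = decode('euc-jp', $bytes); # decode to a Perl character string
printf "U+%04X\n", ord $char;         # prints U+306E -- far beyond 255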
Thinking about it, your "somewhat strange request" sounds like an XY question...
Cheers for the responses, guys.
Okay, I didn't know the iso-8859-1/HTML representation is actually the Unicode representation. The overall goal or whatever is this:
I've got a list of games I slurped off of Amazon.JP. I need their English equivalent names, but of course I want the computer to do the work for me. play-asia.com is a pretty good source: if you search for the Japanese name, the first hit is usually it, and the default display is its English title (bingo). The caveat is that the search URL uses this 5-digit representation for each character. What I did before, when I needed to do something similar for a site that used a legit Japanese encoding, was simply to examine each character and get its corresponding code (UTF-8 or EUC) from a hash populated by one of the tables on the net, much like the one you linked.
So, the search link for "任天堂" (Nintendo) is
http://www.play-asia.com/paOS-19-71-6-49-en-15-%26%2320219%3B%26%2322825%3B%26%2322530%3B-43-6.html
They use %26%23 as a character breaker, FYI.
Your solution seems to be in the right direction, but I'm not quite sure how it comes full circle.
-GP
So you have a string like this: "任天堂", and you have a template for submitting a query containing that sort of string, which is:
http://www.play-asia.com/paOS-19-71-6-49-en-15-{your_string_here}-43-6.html
(I hope I'm parsing that correctly, but I have to say it looks a bit implausible.)
And in order to plug that sort of string into that query template, you have to:
- break the string into separate characters;
- convert each character into its decimal numeric Unicode code point value
- embed each numeric character value between "&#" and ";";
- do the "uri encoding" that turns "&", "#" and ";" into "%26", "%23" and "%3B", respectively
So, where's the problem? Have you tried something like this?
my $url = "http://www.play-asia.com/paOS-19-71-6-49-en-15-XXX-43-6.html";
my $string = "...";  # wherever you get your Unicode Asian string from
                     # (as a decoded character string, not raw bytes)
$string =~ s/([^[:ascii:]])/"%26%23".ord($1)."%3B"/eg;
$url =~ s/XXX/$string/;
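As a quick check (assuming the script source is saved as UTF-8 with use utf8 in effect, so the literal is a character string), that substitution produces the same encoded pieces that appear in the URL quoted above:

use strict;
use warnings;
use utf8;

my $string = "任天堂";
$string =~ s/([^[:ascii:]])/"%26%23".ord($1)."%3B"/eg;
print $string, "\n";   # %26%2320219%3B%26%2322825%3B%26%2322530%3B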
Re: iso-8859-1 code converter
by Anonymous Monk on May 05, 2009 at 06:35 UTC
but actually give me the corresponding code for that character in iso-8859-1.
Here are some ideas:
use Encode;
# assuming $bytes holds the raw EUC-JP bytes for the character
my $ord = ord encode("iso-8859-1", decode("euc-jp", $bytes));
Unicode::UCD, Unicode::Transform
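For what it's worth, a small sketch (using の from above as the example) of what that first idea actually returns: encode() has no iso-8859-1 equivalent for a Japanese character, so with Encode's default check mode it substitutes "?" and you just get back 63:

use strict;
use warnings;
use Encode qw(encode decode);

my $bytes = "\xa4\xce";   # EUC-JP bytes for の
my $ord = ord encode("iso-8859-1", decode("euc-jp", $bytes));
print $ord, "\n";         # 63, i.e. the "?" substitution character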
use Encode;
my $ord = ord decode("euc-jp", $bytes);
$ord = undef if $ord > 255;   # no iso-8859-1 equivalent for code points above 255
But neither implementation is likely to be useful (to anyone in general or to the OP specifically).