in reply to Re: Convert Windows-1252 Characters to Java Unicode Notation
in thread Convert Windows-1252 Characters to Java Unicode Notation

Thank you very much, Juerd. This worked brilliantly:

use Encode qw(decode);

# Convert Windows-1252 characters into Java's Unicode notation...
$md->{$column} =~ s{([\x80-\xFF])}{ sprintf "\\u%04x", ord decode('cp1252', $1) }eg;
Simple and elegant.
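
For anyone following along, here's a minimal, self-contained sketch of the same substitution on made-up sample data (the \x93, \x94, and \x97 bytes below are hypothetical Windows-1252 curly quotes and an em dash, not taken from my actual data):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Encode qw(decode);

    # Hypothetical input: 0x93/0x94 are cp1252 curly double quotes,
    # 0x97 is a cp1252 em dash.
    my $text = "He said \x93hello\x94 \x97 loudly.";

    # Same substitution as above: escape each 8-bit byte as \uXXXX.
    $text =~ s{([\x80-\xFF])}{ sprintf "\\u%04x", ord decode('cp1252', $1) }eg;

    print "$text\n";   # He said \u201chello\u201d \u2014 loudly.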

(By the way, the frequency of occurrence of non-US-ASCII characters in the data is very low in relation to the amount of text. So-called 8-bit characters are infrequent and usually occur in isolation.)

Jim

Re^3: Convert Windows-1252 Characters to Java Unicode Notation
by Juerd (Abbot) on Nov 25, 2007 at 23:05 UTC

    You may be better off with a hard-coded translation table, for performance.

    use Encode qw(decode);

    my %w1252_to_java =
        map { chr($_) => sprintf("\\u%04x", ord decode("Windows-1252", chr)) }
        0x80 .. 0xff;
    ...
    $md->{$column} =~ s/([\x80-\xff])/$w1252_to_java{$1}/g;
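
    Here's a rough, self-contained benchmark sketch comparing the two approaches (the sample string and its repetition count are made up for illustration, not measurements from this thread):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Encode qw(decode);
        use Benchmark qw(cmpthese);

        # Precomputed table: one decode per possible byte, done once up front.
        my %w1252_to_java = map {
            ( chr($_) => sprintf("\\u%04x", ord decode("Windows-1252", chr)) )
        } 0x80 .. 0xff;

        # Made-up sample: mostly ASCII with occasional 8-bit bytes.
        my $sample = ("plain ascii text \x93quoted\x94 ") x 1000;

        cmpthese(-1, {
            decode_each => sub {
                (my $s = $sample) =~
                    s{([\x80-\xFF])}{ sprintf "\\u%04x", ord decode('cp1252', $1) }eg;
            },
            hash_lookup => sub {
                (my $s = $sample) =~ s/([\x80-\xff])/$w1252_to_java{$1}/g;
            },
        });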

Re^3: Convert Windows-1252 Characters to Java Unicode Notation
by bart (Canon) on Nov 26, 2007 at 11:55 UTC
    (By the way, the frequency of occurrence of non-US-ASCII characters in the data is very low in relation to the amount of text. So-called 8-bit characters are infrequent and usually occur in isolation.)
    Maybe in English, but not when your data is in French, for example. In French you can easily have one or two accented characters every other word.

    Ain't it typical again that English-speaking people automatically assume that the whole world uses only English...

    Well, I'm assuming that now you're just talking about your own, personal case. Yes, in that case it's very likely that accented characters are very rare. Until you start getting an international audience, that is...

    BTW the difference between ISO-Latin-1 and Windows-1252 will probably be most visible in the so-called "smart quotes", those curly quotes that bend a different way for opening and closing quotes, plus the matching curly apostrophe.
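
    To make that concrete, here's a quick sketch (my own illustration, not from the original post) showing how the same bytes decode under each encoding:

        use strict;
        use warnings;
        use Encode qw(decode);

        # Bytes 0x91..0x94 are curly quotes in Windows-1252 but C1 control
        # characters in ISO-8859-1 (Latin-1), which maps bytes straight
        # through to U+0080..U+00FF.
        for my $byte (0x91 .. 0x94) {
            printf "0x%02X  cp1252=U+%04X  latin1=U+%04X\n",
                $byte,
                ord decode('cp1252',     chr $byte),
                ord decode('iso-8859-1', chr $byte);
        }
        # 0x91  cp1252=U+2018  latin1=U+0091
        # 0x92  cp1252=U+2019  latin1=U+0092
        # ...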

      Bart,

      I read your second paragraph and exclaimed to myself, "Slow down, cowboy!"

      Then I read your third paragraph and regained my composure.

      Curiously in retrospect, the sentence of mine that you quoted originally read: "By the way, the frequency of occurrence of non-US-ASCII characters in my data is very low..." I changed "my" to "the" before posting. I cannot explain why. Maybe I'm subconsciously averse to claiming ownership of others' data.

      The point of my parenthetical remark was just this: in the discrete data with which I'm working, there's a very low proportion of 8-bit characters vis-à-vis the total amount of text. So, for example, the kind of optimization Juerd suggested later really isn't an optimization at all in my specific case. I anticipated that someone might suggest just such an optimization and implied that it would be a false optimization in this instance.

      I attended the Internationalization & Unicode Conference 31 last month in San José. I rubbed elbows with like-minded folk from all over the world who share my interest in languages and software globalization. Like you, I'm sensitive to matters of language and culture bias in software and computing. If I didn't care, I wouldn't have had a reason to post this inquiry in the first place. I would have just let the handful of 8-bit characters become mojibake and called it a day.

      Jim

        I didn't mean to insult you personally. If you allow me to generalize, people from the USA (and other English-speaking countries, I'm sure) tend to care very little about accented characters, going so far as to ignore the possibility of them entirely; sending mail as US-ASCII, for example.

        Personally, I'm very wary of requirements that could possibly change. Texts that might eventually contain accented characters are a typical example of such a red flag.