in reply to How to handle encodings?

- How do you handle this?

I keep everything in UTF-8, since it's universal and understood by nearly every program.

- Is it a good idea to convert incoming data to Perl's internal format when processing it, and do the reverse when printing/storing processed data?

Yes, it's the way to go IMHO. You can use IO layers and Encode to do it for you.
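A minimal sketch of both approaches; the sample octets here are only illustrative:

```perl
use strict;
use warnings;
use Encode qw(decode encode);

# Explicit conversion with Encode:
my $octets = "\xe9";                          # "é" as a Latin-1 byte
my $chars  = decode('iso-8859-1', $octets);   # now a Perl character string
my $utf8   = encode('UTF-8', $chars);         # "\xc3\xa9" -- UTF-8 octets

# Or let a PerlIO layer do the decoding for you (an in-memory handle
# keeps the demo self-contained; a real program opens a file the same way):
open my $in, '<:encoding(iso-8859-1)', \$octets or die "open: $!";
my $decoded = <$in>;    # already decoded to characters
close $in;
```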

- Does the format of the file containing the Perl code itself matter?

If there are string constants in that file and you concatenate them with the data, it does matter. So you should decode those string constants (or keep the Perl files in UTF-8 and use utf8;, which does the decoding for you).
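For example, assuming the source file itself is saved as UTF-8:

```perl
use utf8;          # tells Perl to decode this file's string literals
use strict;
use warnings;

my $greeting = "Grüße";             # a character string, thanks to 'use utf8;'
my $name     = "José";              # likewise decoded at compile time
my $message  = "$greeting, $name!"; # safe to mix with decoded input data
```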

I've looked at the Encode::Guess module; is this an option for deciding the format of the incoming data?

No. Guessing an encoding is not reliable, and you should avoid it if at all possible. Make sure that all your interfaces have a way to specify the encoding.

My concluding question: What is the best way to deal with different character sets in a system?

Keep all data internally in a consistent format, and recode at the boundary between what you consider "internal" and "external". The internal encoding should be a Unicode encoding (such as UTF-8 or UTF-16{l,b}e) so that you won't lose any information during recoding. Unicode aims for round-trip conversion between non-Unicode charsets and Unicode, and for all common encodings it pretty much succeeds.
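A sketch of that boundary pattern; the charsets are examples, and in practice they would come from the interface (HTTP header, file metadata, a user option, ...):

```perl
use strict;
use warnings;
use feature 'unicode_strings';   # full Unicode semantics for uc() et al.
use Encode qw(decode encode);

sub process {
    my ($text) = @_;   # internal code sees character data only
    return uc $text;
}

my $incoming = "\xe9t\xe9";                     # "été" as Latin-1 octets
my $chars    = decode('iso-8859-1', $incoming); # boundary: external -> internal
my $result   = process($chars);                 # all work done on characters
my $outgoing = encode('UTF-8', $result);        # boundary: internal -> external
```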

Re^2: How to handle encodings?
by bibliophile (Prior) on Mar 06, 2009 at 21:10 UTC
      Definitely worth a read. It inspired me to write this article with a similar intention, but more focused on Perl programming.
Re^2: How to handle encodings?
by graff (Chancellor) on Mar 07, 2009 at 17:26 UTC
    I've looked at the Encode::Guess module; is this an option for deciding the format of the incoming data?

    No. Guessing an encoding is not reliable, and you should avoid it if at all possible. Make sure that all your interfaces have a way to specify the encoding.

    I wouldn't be so harsh on Encode::Guess. It definitely can be useful when applied correctly to the right problems, and I think its man page does an okay job of saying what its strengths and weaknesses are.

    I agree that using it as a "do-all" for every multi-encoding task would be wrong. Ideally, all your inputs will provide some sort of declarative or unambiguous evidence of the encoding being used; for inputs that don't, you may need all the help you can get (doing "offline" research to understand the data) to figure out what encoding the data is using, and Encode::Guess can help in such cases.

    Once you understand your data well enough, and you understand how Encode::Guess handles it, you may actually find it worthwhile to use the module in a production pipeline to route data according to what it can tell you (in the absence of any other information) -- but doing so without thorough testing would be a mistake.
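    For instance, here is a sketch of working with a constrained suspect list; the suspects and sample data are only illustrative:

```perl
use strict;
use warnings;
use Encode::Guess;   # default suspects: ascii and utf8

# Unambiguous case: bytes that are valid UTF-8 but not valid ASCII.
my $decoder = Encode::Guess->guess("caf\xc3\xa9");
if (ref $decoder) {
    printf "guessed %s\n", $decoder->name;   # likely "utf8" here
    my $chars = $decoder->decode("caf\xc3\xa9");
}
else {
    warn "ambiguous or unknown: $decoder\n"; # guess() returns an error string
}

# Overlapping suspects show the failure mode discussed in this thread:
Encode::Guess->add_suspects('iso-8859-2', 'iso-8859-7');
my $guess = Encode::Guess->guess(chr(200));  # 0xC8 is valid in both charsets
# ref($guess) is false now: we get an error string, not a decoder
```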

      A typical place where Encode::Guess falls down (through no fault of its own) is in differentiating one variant of iso-8859 from another.

      Who's to say whether chr(200) is "Č" (ISO-8859-2) or "Θ" (ISO-8859-7)?

      Without prior knowledge, you're up the creek without a paddle. So I agree wholeheartedly with moritz's suggestion of converting everything to UTF-8 while you still know what encoding it is in.
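      A quick illustration of that ambiguity:

```perl
use strict;
use warnings;
use Encode qw(decode);

my $byte   = chr(200);                     # the octet 0xC8
my $latin2 = decode('iso-8859-2', $byte);  # "Č" (U+010C)
my $greek  = decode('iso-8859-7', $byte);  # "Θ" (U+0398)
# Same input byte, two different characters -- only context can tell.
```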

      (graff - I know you're too wise a monk to have been suggesting otherwise, but I wanted to provide a simple example of just how limited Encode::Guess can be.)

      Clint

        Who's to say whether chr(200) is "Č" (ISO-8859-2) or "Θ" (ISO-8859-7)?

        Any moderately experienced human eye, given a paragraph of context. Have you ever played the game of pulling down the Character Encoding menu in your web browser and trying the same page in different encodings?

        This is not a dissent from moritz's and clinton's advice. By all means heed any available declarations.

        But in the absence of a declaration, the right encoding is humanly guessable, even when the alternatives are different members of the ISO-8859 series. And if it's humanly guessable, then a CPAN module should in principle be able to do it too.

        I don't know what algorithm Encode::Guess uses. My own (untested) algorithm would be:

        • submit a phrase to Google as a search term;
        • accept Google's proposed spelling correction.