in reply to Convert strings with unknown encodings to html

So, you're saying that you have this one database (just one table? multiple tables?), and when you query to get strings from it (from just one column? from multiple columns?), you sometimes get utf8 strings, and sometimes get cp1252 strings, and sometimes get character entity references like &reg; (and sometimes numeric references like &#xfe; or &#254;?). Can all the variation occur within a single column, or is it different depending on which column holds the string?
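For diagnosing that kind of mix, one quick check is whether the raw bytes decode cleanly as UTF-8; if they don't, cp1252 is a plausible fallback. A minimal sketch using the core Encode module (the helper name `guess_decode` is made up for illustration):

```perl
use strict;
use warnings;
use Encode qw(decode);

# Guess the encoding of a byte string: valid UTF-8 decodes without
# error under FB_CROAK; anything else is assumed to be cp1252.
sub guess_decode {
    my ($bytes) = @_;
    my $text = eval {
        decode('UTF-8', $bytes, Encode::FB_CROAK | Encode::LEAVE_SRC);
    };
    return defined $text ? ($text, 'utf8')
                         : (decode('cp1252', $bytes), 'cp1252');
}

my ($text, $enc) = guess_decode("caf\xC3\xA9");  # UTF-8 bytes for "café"
print "$enc\n";                                  # prints "utf8"
($text, $enc) = guess_decode("caf\xE9");         # cp1252 byte for "é"
print "$enc\n";                                  # prints "cp1252"
```

Note that this heuristic can misfire: some cp1252 byte sequences happen to be valid UTF-8, so it only gives a best guess, not a proof.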

And have you decided what format you want to normalize to? If so, what is that? (If not, why not?)

If there's really no way to predict what sort of encoding is coming back from the database for a given query, then you really do have one totally fubar'd database. What a shame.

I gather you've done some diagnosis of database contents, and have some idea about the scope of variation. Is stuff still being added to it? If so, does it continue to be as messy and uncontrolled as the stuff that's already there?

Don't feel like you have to tell us the answers to all those questions - those are just the main things you have to think about because they affect what kind(s) of solution(s) are likely to be useful.

Let's suppose you want your "normalized format" to be just utf8 characters (no entities like &amp;, &reg;, &#xf8ff;, etc.).

In terms of checking what needs to be done to a given string in order to get to that normalized form, there are a few handy guidelines:

Once the string is purely utf8 characters with no entity references, it should be pretty easy to convert that, if necessary, to any other form that you may need for a web display. Good luck.
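As a sketch of that final step, assuming the normalized string already holds decoded utf8 characters, HTML::Entities (from the CPAN HTML-Parser distribution) can produce entity-encoded output for a web page:

```perl
use strict;
use warnings;
use utf8;
use HTML::Entities qw(encode_entities);

my $normalized = 'café & <b>naïve</b>';      # pure characters, no entities
my $html = encode_entities($normalized);     # escapes &, <, > and non-ASCII
print $html, "\n";  # caf&eacute; &amp; &lt;b&gt;na&iuml;ve&lt;/b&gt;
```

By default encode_entities escapes control characters, high-bit characters, and the HTML-significant characters; a second argument can restrict it to a custom character set if you want to emit raw utf8 and escape only markup characters.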

Re^2: Convert strings with unknown encodings to html
by Pascal666 (Scribe) on Jul 01, 2015 at 02:18 UTC

    I mentioned the database primarily so that I would not get suggestions that involve open's encoding option. All of the examples above are from the same column. It may be possible to have multiple formats in the same string. Even if that is not the case today, I would like a solution that supports it in the future.

    I need to normalize to HTML. Whatever intermediate encoding gets the job done is fine by me. Passing html to encode_entities causes it to be double encoded, which does not display correctly on a web page, so first thing I do is run the input through decode_entities. I suppose it is possible that the input will already be double encoded html. I hadn't thought about running decode_entities multiple times before. Good idea.
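    The repeated-decoding idea can be sketched like this (a hypothetical helper, not the poster's actual code): keep calling decode_entities until the string stops changing, so singly- and double-encoded input both collapse to plain characters before the one final encode_entities.

```perl
use strict;
use warnings;
use HTML::Entities qw(decode_entities encode_entities);

# Decode entities repeatedly until the string is stable, then encode
# once - so "&amp;reg;" and "&reg;" both come out as "&reg;" instead
# of the first one being emitted double-encoded.
sub normalize_to_html {
    my ($s) = @_;
    my $prev;
    do {
        $prev = $s;
        $s = decode_entities($s);
    } until ($s eq $prev);
    return encode_entities($s);
}

print normalize_to_html('&amp;reg;'), "\n";  # double-encoded input
print normalize_to_html('&reg;'), "\n";      # singly-encoded input
```

    One caveat with looping to a fixed point: input that legitimately means the literal text "&reg;" (i.e. "&amp;reg;") gets collapsed too, so this trades correctness on that edge case for robustness against double encoding.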

    The database is only a few months old, is added to constantly, and is expected to be added to indefinitely. It is extremely unlikely the provider will do anything to improve their input sanitization. Had they the choice, they would not provide me the data in any format. They certainly aren't going to make it any easier to parse.

    I do try not to post n run. You took the time to read my message and reply; the least I can do is satisfy your curiosity.

    Thank you for the detailed analysis and suggestions. Had Encoding::FixLatin not been suggested first, I probably would have ended up using your "guidelines" as an outline for a solution.

      I hadn't heard of Encoding::FixLatin myself prior to seeing this thread, and I'm glad to have learned about it.
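      For the record, Encoding::FixLatin's interface is small: fix_latin takes a byte string that may mix utf8, latin-1, and cp1252 and returns a consistent utf8 character string. A minimal sketch (the sample bytes are illustrative):

```perl
use strict;
use warnings;
use Encoding::FixLatin qw(fix_latin);

# Mixed input: UTF-8 bytes for "é" followed by a lone cp1252 "é" byte.
my $messy = "caf\xC3\xA9 caf\xE9";
my $clean = fix_latin($messy);   # both words become the same "café"
print $clean, "\n";
```

      It only addresses the byte-encoding half of the problem, of course; entity references still need a separate decode_entities pass.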