I'm at a loss as to how to resolve this. I have a file full of messages grabbed from an NNTP server. Some of the messages use characters > 127, and these don't come out in a way I can easily read.
For instance, the name "Mlle. Anaïs" (where the i has a diaeresis on top) is in the file as "Mlle. =?iso-8859-1?Q?Ana=EFs?=". I believe I understand this: it's telling me that the "Anais" part consists of "Ana", then the Latin-1 Supplement character represented by 0xEF, then "s".
I'd like to save these captured messages in a format suitable for reading in a web browser. I know I could do a brute-force substitution where I find anything starting with '=?iso-8859-1' and smash it with some Perl code, but I suspect there's a clean, elegant means of doing this. Suggestions? Ideas on where to RTFM?
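For reference, the "=?iso-8859-1?Q?...?=" syntax is an RFC 2047 "encoded word" (the Q stands for a quoted-printable-style encoding), so a dedicated decoder is cleaner than regex substitution. In Perl, the core Encode module understands this via its "MIME-Header" encoding (decode('MIME-Header', $line)). As an illustrative sketch of the same decoding using the Python standard library — not necessarily the poster's intended toolchain:

```python
from email.header import decode_header, make_header

raw = "Mlle. =?iso-8859-1?Q?Ana=EFs?="

# decode_header() splits the string into (bytes, charset) chunks,
# decoding each RFC 2047 encoded word it finds; make_header()
# reassembles those chunks into a single Unicode Header object.
decoded = str(make_header(decode_header(raw)))
print(decoded)  # the 0xEF byte comes out as the character ï
```

Once decoded to Unicode, the text can be written out as UTF-8 (with a matching charset declaration) for browser-friendly HTML.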
In reply to Parsing (Unicode?) characters from Usenet messages by Anonymous Monk