&#160; was replaced by the corresponding Unicode (in fact UTF-8) character.
Your choices are to tell XML::LibXSLT not to replace numeric entities (I have no idea if this is possible) or to post-process the result (you might need to convert it to ISO-8859-1 beforehand) using, for example, HTML::Entities.
You could also decide to just go the Unicode way and leave the character as-is, changing the encoding of the document to UTF-8 (in fact everything you get back from the XSLT transformation is probably UTF-8 already; your problem is not just &#160;).
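A minimal sketch of the post-processing route, assuming the transform hands back raw UTF-8 bytes ($raw is a made-up name standing in for that output):

    use strict;
    use warnings;
    use Encode qw(decode);
    use HTML::Entities qw(encode_entities);

    # $raw stands in for whatever your transform returned; here it is a
    # sample string containing the UTF-8 bytes for a non-breaking space.
    my $raw = "foo\xC2\xA0bar";

    # Make it a character string first so HTML::Entities sees real
    # characters rather than raw bytes.
    my $chars = decode('UTF-8', $raw);

    # Re-escape everything outside printable ASCII; the non-breaking
    # space becomes an entity again instead of a stray byte pair.
    print encode_entities($chars, '^\n\r\x20-\x7e'), "\n";   # foo&nbsp;bar

Decoding to a character string first lets HTML::Entities escape the characters directly, so the ISO-8859-1 intermediate step isn't strictly needed.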
Normally, to display whitespace in HTML via XSL, one would use &#160;. But when using libXSLT, &#160; was replaced by Â.
What it did was convert &#160; to its Unicode character, encoded as UTF-8. Your options are to specify a charset of Latin-1 or to de-UTF-8 your output.
I have been having "fun" with this issue a lot recently - see XML Simple Charset Q?, XML::Parser and &entity; and (for the de-UTF-8 trick) Re: Re: XML::Parser and &entity; - it may help you to read those nodes and the replies, especially the ones by mirod.
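For what it's worth, the first of those options usually just means adding <xsl:output method="html" encoding="ISO-8859-1"/> to the stylesheet. For the de-UTF-8 direction, a sketch using the core Encode module (the $output name is illustrative only):

    use strict;
    use warnings;
    use Encode qw(from_to);

    # $output stands in for the UTF-8 byte string the transform produced.
    my $output = "foo\xC2\xA0bar";

    # Convert the buffer from UTF-8 to Latin-1 bytes in place; the
    # two-byte sequence C2 A0 collapses back to the single byte A0.
    from_to($output, 'UTF-8', 'ISO-8859-1');
    printf "%vX\n", $output;   # 66.6F.6F.A0.62.61.72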
Dingus
Enter any 47-digit prime number to continue.
I don't understand the previous two replies, since character 160 has the same meaning in Latin-1 as it does in Unicode. Namely, none. That is a C1 control code with no defined meaning.
In the Windows 1252 character set, the C1 zone has extra characters in it, making it a super-set of Latin 1. I guess 160 (0xA0) is the non-breaking space character.
So, the code appears to be converting to a character set where the output for 0xA0 begins with Â (A-circumflex), which is 194 (0xC2) in Latin-1.
Hmm, perhaps the resulting UTF-8 is being treated as two separate characters later. A0 will be encoded as C2 A0, which if re-displayed as Windows 1252 (or Latin-1) is Â followed by the wanted non-breaking space - which matches exactly what you reported.
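A quick sketch with Encode makes the round trip visible (assuming a perl recent enough to ship Encode):

    use strict;
    use warnings;
    use Encode qw(encode decode);

    # U+00A0 (non-breaking space) encoded as UTF-8 gives the bytes C2 A0.
    my $bytes = encode('UTF-8', "\x{00A0}");
    printf "bytes: %vX\n", $bytes;                    # C2.A0

    # Mis-reading those two bytes as CP1252 (or Latin-1) gives two
    # characters: U+00C2 (A-circumflex) followed by U+00A0 (NBSP).
    my $seen = decode('cp1252', $bytes);
    printf "length: %d characters\n", length($seen);  # 2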
There's a subtle distinction going on here. Character 160 (0xA0) is a non-breaking space in both Latin-1 and Unicode. However the binary representation of a Unicode character depends on what encoding you use:
- if you encode Unicode with UTF-16 then every character in the Basic Multilingual Plane is represented as two bytes (characters outside it need a four-byte surrogate pair) and character 160 will be 0x00 0xA0 (or 0xA0 0x00 depending on endian-ness)
- UTF-8 on the other hand uses a variable number of bytes (from 1 to 4) to represent Unicode characters. All characters from 0-127 are a single byte (so plain ASCII files are also UTF-8), characters from 128-2047 take two bytes, 2048-65535 take three, and anything beyond that takes four (see the sketch after this list)
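To make those bullet points concrete, a small sketch showing how character 160 comes out under each encoding (assuming the core Encode module):

    use strict;
    use warnings;
    use Encode qw(encode);

    # Encode character 160 (the non-breaking space) three ways and show
    # how many bytes each representation needs.
    for my $enc ('ISO-8859-1', 'UTF-16BE', 'UTF-8') {
        my $bytes = encode($enc, "\x{00A0}");
        printf "%-10s %d byte(s): %vX\n", $enc, length($bytes), $bytes;
    }
    # ISO-8859-1 1 byte(s): A0
    # UTF-16BE   2 byte(s): 0.A0
    # UTF-8      2 byte(s): C2.A0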
Note also that the Windows 1252 character set is not a super-set of Latin-1. Latin-1 is an 8-bit character set in which every character position has been assigned. Microsoft decided that the control characters in the 0x80-0x9F range were not useful, so in CP1252 they removed those character assignments and replaced them with other characters (such as 'smart quotes').
I think we'd all agree that 7-bit ASCII is insufficient for general use outside of the US (in fact it can't even represent the US 'cent' symbol: ¢). The 8-bit Latin-1 character set is a 'point solution' for Western European nations, but it is no longer sufficient for them either (e.g. no Euro symbol: €). The clear way forward is widespread adoption of Unicode - it's here now and it works already. Clinging to and tweaking region-specific 8-bit encodings is a dead-end strategy.
Sorry if this sounds like a rant - it's not intended to be personal. People just seem to waste a lot of time trying to translate perfectly good Unicode characters back into 'legacy' encodings instead of just using them as they are.