Each of these schemes requires some amount of knowledge that may not be determinable by examining just the data itself.
Just to be clear: I didn't imply anywhere that the developer should determine the encoding by examining the data. Personally, I believe that guessing the encoding is a sin; it should be done only if there is no other choice. It is much better to force the user to provide the encoding if it is not already known.
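To sketch that policy in Java (the helper name is mine, and any charset could be passed in): the caller must supply the encoding explicitly, and the decoder reports malformed input instead of quietly substituting characters.

    import java.nio.ByteBuffer;
    import java.nio.charset.CharacterCodingException;
    import java.nio.charset.Charset;
    import java.nio.charset.CodingErrorAction;

    public class ExplicitDecode {
        // Decode raw bytes with a caller-supplied charset; never guess.
        static String decodeStrict(byte[] raw, Charset cs) throws CharacterCodingException {
            return cs.newDecoder()
                     .onMalformedInput(CodingErrorAction.REPORT)      // fail loudly
                     .onUnmappableCharacter(CodingErrorAction.REPORT) // no silent mangling
                     .decode(ByteBuffer.wrap(raw))
                     .toString();
        }

        public static void main(String[] args) throws Exception {
            byte[] raw = { (byte) 0xC3, (byte) 0xA9 }; // "é" encoded as UTF-8
            System.out.println(decodeStrict(raw, Charset.forName("UTF-8")));
        }
    }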
... Unicode is such a system.
This is just so wrong. For one, Unicode is not an encoding. Rather, UTF-8, UTF-16, etc. are encodings. And a rather common one of them, UTF-8, is variable-width, i.e. not the same number of bytes per character.
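For a concrete illustration of that variable width (a minimal Java sketch; the characters are arbitrary examples), a single character can take anywhere from one to four bytes in UTF-8:

    import java.nio.charset.StandardCharsets;

    public class Utf8Width {
        public static void main(String[] args) {
            // One character each, but different byte counts in UTF-8:
            System.out.println("a".getBytes(StandardCharsets.UTF_8).length);            // 1
            System.out.println("é".getBytes(StandardCharsets.UTF_8).length);            // 2
            System.out.println("€".getBytes(StandardCharsets.UTF_8).length);            // 3
            System.out.println("\uD834\uDD1E".getBytes(StandardCharsets.UTF_8).length); // 4 (U+1D11E)
        }
    }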
For one, Unicode is not an encoding. Rather, UTF-8, UTF-16, etc. are encodings. And a rather common one of them, UTF-8, is variable-width, i.e. not the same number of bytes per character.
Both UTF-8 and UTF-16 are variable-width encodings. The essential difference between them is the size of the code unit. There is an infinitude of Java and Windows code (though not necessarily both at once) out there that screws this up by treating UTF-16 as if it were UCS-2. It very much is not, and UCS-2 isn't even a valid Unicode encoding in the first place. UTF-8, UTF-16, and UTF-32 are, and of those, only the last uses fixed-width code units. UTF-16 is problematic and annoying in several ways that affect neither UTF-8 nor UTF-32, but that doesn't make it fixed-width.
So the same statement you've made about UTF-8 applies equally well, mutatis mutandis, to UTF-16: "UTF-16 is also a variable-width encoding, i.e. not the same number of 16-bit code units per character." It would be a very, very good idea to remain ever conscious of this, given how much harm has been done by negligent programmers who have not.
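To make the point concrete, here is a minimal Java sketch (the G clef character is just an example): String.length() counts 16-bit code units, so a single character outside the BMP counts as two, and naive UCS-2-style indexing hands back half a surrogate pair.

    public class Utf16Width {
        public static void main(String[] args) {
            String gClef = "\uD834\uDD1E"; // U+1D11E MUSICAL SYMBOL G CLEF, one character
            // length() counts UTF-16 code units, not characters:
            System.out.println(gClef.length());                          // 2
            // codePointCount() counts actual Unicode code points:
            System.out.println(gClef.codePointCount(0, gClef.length())); // 1
            // UCS-2-style indexing yields half a surrogate pair:
            System.out.println(Character.isHighSurrogate(gClef.charAt(0))); // true
        }
    }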