This is indeed probably not the best place to discuss that.
We could move to email if there is more to say. (But how would we exchange email addresses with an Anonymonk?)
and yet it's the best general-purpose encoding today
That's a bit like saying Kim Jong-un is the best leader in NK :)
For the web -- where the encoding is effectively declared for you, via the charset parameter of the Content-Type header -- it works well enough; but the moment you put the text into a bare file there is no provision for identifying the particular encoding, and you're back to guessing.
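(For the curious: the guessing itself can at least be mechanised, heuristically. A minimal sketch using the core Encode::Guess module; the file name and the candidate list are assumptions for illustration:

    use strict;
    use warnings;
    use Encode::Guess;                  # exports guess_encoding()

    # Slurp the file as raw octets; 'mystery.txt' is hypothetical.
    open my $fh, '<:raw', 'mystery.txt' or die "open: $!";
    my $octets = do { local $/; <$fh> };

    # Pick among a few candidate suspects -- exactly the "guessing"
    # complained about above, just automated.
    my $enc = guess_encoding($octets, qw/cp1252 shiftjis/);
    die "Could not guess: $enc" unless ref $enc;
    printf "Best guess: %s\n", $enc->name;
    my $text = $enc->decode($octets);

Note that guess_encoding returns an error string rather than an encoding object when the candidates are ambiguous -- which, for overlapping 8-bit code pages, is often.)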
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
In the absence of evidence, opinion is indistinguishable from prejudice.
абвгд
Yeah, PerlMonks uses a one-byte encoding... Windows-1252, I believe.
Now, there could be a self-synchronizing, easily recognizable, fixed-length encoding, but it wouldn't be backwards compatible with 7-bit ASCII. So what did you expect? If it's not backwards, it's not compatible...
It's easily recognizable: it's extremely unlikely that a string of any real length in another encoding just happens to be valid UTF-8.
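And that recognizability is trivial to test for. A minimal sketch with the core Encode module (the sample strings are arbitrary): a strict decode either succeeds or croaks, and legacy 8-bit text containing any high-bit characters will almost never pass.

    use Encode qw(decode FB_CROAK);

    # True if the octets form well-formed UTF-8; strict decoding
    # croaks on the first malformed sequence.
    sub looks_like_utf8 {
        my ($octets) = @_;
        return eval { decode('UTF-8', $octets, FB_CROAK); 1 } ? 1 : 0;
    }

    print looks_like_utf8("r\xC3\xA9sum\xC3\xA9") ? "yes" : "no";  # yes: valid UTF-8
    print looks_like_utf8("r\xE9sum\xE9")         ? "yes" : "no";  # no: Latin-1 octets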
And if that were the only unicode encoding, that might be a recommendation; but there is a multitude of "unicode" encodings, and the rest of them don't share that property.
Just remove a couple of random bytes from a UTF-8 string, and you'll lose a couple of characters. All others are still there, completely undamaged.
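That's easy to demonstrate with Encode's default lenient decoding (the sample string is arbitrary):

    use Encode qw(encode decode);
    binmode STDOUT, ':encoding(UTF-8)';

    my $octets = encode('UTF-8', "\x{3053}\x{3093}\x{306B}\x{3061}\x{306F}"); # 5 chars, 15 octets
    substr($octets, 4, 2, '');            # delete two octets from the middle
    my $text = decode('UTF-8', $octets);  # lenient mode substitutes U+FFFD
    print $text, "\n";                    # one character mangled; the rest intact

The damaged character comes back as a replacement character, and decoding re-synchronizes at the very next lead byte.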
That's a bit like saying that a fast poison is better than a slow poison because you suffer less. Basically making a feature of an incidental property that has no value in the real world.
Bytes don't randomly disappear from the middle of files; and streams have low-level error detection/resend to deal with such events. The ability to re-synch a corrupted stream is of little value when it is such a rare event; and entirely not worth the costs of achieving it.
Remove a couple of bytes in the middle of a UTF-32 string, and the rest of the string IS binary garbage.
I'm not even sure that is true -- just move to the end and step backwards -- but even if it were, it is again of little relevance, because bytes don't randomly disappear from files, and such losses will be detected and corrected by the transport protocols in streams.
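The backwards walk does work, for what it's worth: the final byte of the stream is still the final byte of a code unit, so 4-byte groups aligned to the end of the buffer become whole code units again once you're past the damage. A sketch with UTF-32LE (the sample string and damage position are arbitrary):

    use Encode qw(encode);

    my $octets = encode('UTF-32LE', 'ABCDEFGH');  # 8 chars, 32 octets
    substr($octets, 10, 2, '');                   # drop two octets mid-stream

    # A forward decode misaligns everything after the damage, but
    # groups of four aligned to the END are whole code units again:
    my $skew  = length($octets) % 4;
    my @units = unpack 'V*', substr($octets, $skew);

    # Units spanning the damage decode as out-of-range garbage;
    # everything after it is intact.
    print map { $_ <= 0x10FFFF ? chr($_) : '?' } @units;   # prints ??DEFGH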
One-byte encodings are just not general purpose, since some users want to use all kinds of characters in their documents.
I've never suggested that we should return to 1-byte encodings; but you have to recognise that variable-length encoding undoes 50 years of research into search/sorting/comparison algorithms for no real benefit.
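One concrete casualty is O(1) indexing: with a fixed-width encoding the Nth character sits at byte N * width, while with UTF-8 you have to walk the lead bytes. A sketch, assuming well-formed UTF-8 input (the helper name is made up):

    use Encode qw(encode);

    # Byte offset of the $n-th character (0-based) in well-formed
    # UTF-8: an O(n) scan over the lead bytes.
    sub utf8_byte_offset {
        my ($octets, $n) = @_;
        my $off = 0;
        while ($n-- > 0) {
            my $lead = ord substr $octets, $off, 1;
            $off += $lead < 0x80 ? 1
                  : $lead < 0xE0 ? 2
                  : $lead < 0xF0 ? 3
                  :                4;
        }
        return $off;
    }

    my $octets = encode('UTF-8', "na\x{EF}ve caf\x{E9}");   # "naïve café"
    print utf8_byte_offset($octets, 9);   # 10, not 9: char and byte indices diverge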
but it wouldn't be backwards compatible with 7-bit ASCII.
Recognise that the vast majority of computer systems and users were encoding files in localised ways (8-bit chars/code pages) for many years before the misbegotten birth of unicode and its forerunners; and UTF-8 is not backwards compatible with any of that huge mountain of legacy data. Consigning all that legacy data to the dustbin as the product of "devs and users who created garbage" is small-minded bigotry.
Very few people (basically, only the US and IETF) went straight from 7-bit to unicode. There are huge amounts of research and data that were produced using JIS/Kanji, Cyrillic, Hebrew, Arabic et al, and unicode is not compatible with any of it.
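And moving any of that data forward means an explicit transcode, for which you must already know (or guess) the source code page. A sketch with the core Encode module, using Windows-1252 as the example source:

    use Encode qw(decode encode);

    my $legacy = "r\xE9sum\xE9";          # "résumé" as Windows-1252 octets
    # Those octets are not valid UTF-8, so the data cannot simply be
    # relabelled; it must be decoded from its known source code page:
    my $text = decode('cp1252', $legacy);
    my $utf8 = encode('UTF-8', $text);    # now "r\xC3\xA9sum\xC3\xA9"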