"It's easily recognizable. It's just extremely unlikely that you'll get a (not super short) string that just happens to look like valid UTF-8."

And if that were the only Unicode encoding, it might be a point in its favour; but there are a multitude of "Unicode" encodings, and the rest of them don't share that property.
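For concreteness, that recognisability test amounts to nothing more than a strict decode. A minimal sketch using the stock Encode module (the sub name is mine, purely illustrative):

<code>
use strict;
use warnings;
use Encode ();

# True if the octets are well-formed, strict UTF-8. Note 'UTF-8', not 'utf8':
# the latter is Perl's laxer internal variant and accepts far more.
sub looks_like_utf8 {
    my ($octets) = @_;    # lexical copy; decode() may modify its argument
    return eval { Encode::decode('UTF-8', $octets, Encode::FB_CROAK); 1 } ? 1 : 0;
}

my $random = pack 'C*', map { int rand 256 } 1 .. 64;
printf "valid UTF-8? %s\n", looks_like_utf8($random) ? 'yes' : 'no';
</code>

Random bytes will almost never pass that test, whereas a short run of random bytes stands a fair chance of decoding cleanly as, say, UTF-16, which is the sort of thing being objected to here.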

"Just remove a couple of random bytes from a UTF-8 string, and you'll lose a couple of characters. All others are still there, completely undamaged."

That's a bit like saying a fast poison is better than a slow one because you suffer less: it makes a feature of an incidental property that has no value in the real world.

Bytes don't randomly disappear from the middle of files, and streams have low-level error detection and retransmission to deal with such events. The ability to re-synchronise a corrupted stream is of little value when corruption is such a rare event, and certainly not worth the costs of achieving it.

"Remove a couple of bytes in the middle of a UTF-32 string, and the rest of the string IS binary garbage."

I'm not even sure that is true (just move to the end and step backwards), but even if it were, it is again of little relevance: bytes don't randomly disappear from files, and errors in streams are detected and corrected by the transport protocols.
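Both claims are easy to demonstrate. A throwaway sketch with the stock Encode module (the sample text and byte offsets are arbitrary choices of mine):

<code>
use strict;
use warnings;
use Encode qw(encode decode);

binmode STDOUT, ':encoding(UTF-8)';

my $text = "\x{3053}\x{3093}\x{306B}\x{3061}\x{306F} world";   # five Japanese chars + ASCII tail

# UTF-8: delete one byte. The lenient default decode maps the damaged sequence
# to U+FFFD and re-synchronises at the next lead byte, so the rest survives.
my $u8 = encode('UTF-8', $text);
substr($u8, 4, 1, '');                        # knock out a continuation byte of the 2nd char
print "UTF-8 : ", decode('UTF-8', $u8), "\n";

# UTF-32LE: delete one byte and every 4-byte unit after the cut is misaligned.
# Reinterpret the bytes as 32-bit little-endian integers to see the damage.
my $u32 = encode('UTF-32LE', $text);
substr($u32, 4, 1, '');
print "UTF-32: ", join(' ', map { sprintf 'U+%X', $_ } unpack 'V*', $u32), "\n";
</code>

Both behaviours are real; whether the UTF-8 one buys anything once the transport layer has already guaranteed the bytes is exactly what is in dispute here.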

"One byte encodings are just not general purpose... Since some users want to use all kinds of characters in their documents."

I've never suggested that we should return to 1-byte encodings; but you have to recognise that variable-length encoding undoes 50 years of research into search, sort and comparison algorithms, for no real benefit.
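The indexing cost is the concrete version of that complaint: finding the Nth character in UTF-8 bytes means scanning from the start, where a fixed-width encoding needs only a multiply. A rough sketch (the sub is mine, and it assumes well-formed input):

<code>
use strict;
use warnings;
use Encode qw(encode);

# Byte offset of the $n-th (0-based) character in a UTF-8 byte string.
# O(n): each character's width has to be read from its lead byte.
sub utf8_byte_offset_of_char {
    my ($bytes, $n) = @_;
    my ($off, $seen) = (0, 0);
    while ($off < length($bytes) && $seen < $n) {
        my $lead = ord(substr($bytes, $off, 1));
        $off += $lead < 0x80 ? 1    # ASCII
              : $lead < 0xE0 ? 2    # 2-byte sequence
              : $lead < 0xF0 ? 3    # 3-byte sequence
              :                4;   # 4-byte sequence
        $seen++;
    }
    return $off;
}

my $bytes = encode('UTF-8', "caf\x{E9} na\x{EF}ve \x{1F600}");
printf "char 5 starts at byte %d in UTF-8 (found by scanning)\n",
    utf8_byte_offset_of_char($bytes, 5);
printf "char 5 starts at byte %d in UTF-32 (just 5 * 4)\n", 5 * 4;
</code>

Once the data has been decoded, Perl's substr and index work in characters again; but algorithms that assume constant-time access to the i-th element no longer get it for free on the encoded form, which is the trade-off being objected to.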

"but it wouldn't be backwards compatible with 7-bit ASCII."

Recognise that the vast majority of computer systems and users were encoding files in localised ways (8-bit characters and code pages) for many years before the misbegotten birth of Unicode and its forerunners, and UTF-8 is not backwards compatible with any of that huge mountain of legacy data. Consigning it all to the dustbin as the product of "devs and users who created garbage" is small-minded bigotry.

Very few people (basically, only the US and the IETF) went straight from 7-bit ASCII to Unicode. There are huge amounts of research and data that were produced using JIS/Kanji, Cyrillic, Hebrew, Arabic and other local encodings, and Unicode is not compatible with any of it.
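For what it's worth, dealing with that legacy mountain means transcoding every file, and knowing (or guessing) which code page it was written in. A sketch with the stock Encode module; the encoding names and sample bytes are mine, purely for illustration:

<code>
use strict;
use warnings;
use Encode qw(decode encode);

binmode STDOUT, ':encoding(UTF-8)';

# Two scraps of legacy code-page data: "nihongo" (Japanese) in Shift-JIS and
# "privet" (Russian) in CP1251. Neither is valid UTF-8 as it stands.
my %samples = (
    shiftjis => "\x93\xFA\x96\x7B\x8C\xEA",
    cp1251   => "\xCF\xF0\xE8\xE2\xE5\xF2",
);

for my $cp (sort keys %samples) {
    my $text = decode($cp, $samples{$cp});   # legacy bytes -> characters
    my $utf8 = encode('UTF-8', $text);       # characters   -> UTF-8 bytes
    printf "%-8s %s  (%d bytes in, %d UTF-8 bytes out)\n",
        $cp, $text, length($samples{$cp}), length($utf8);
}
</code>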


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". I knew I was on the right track :)
In the absence of evidence, opinion is indistinguishable from prejudice.
