And if that was the only unicode encoding, it might be a recommendation; but there are a multitude of "unicode" encodings, the rest of which don't share that property.
Use what works.
That's a bit like saying that a fast poison is better than a slow poison because you suffer less. Basically making a feature of an incidental property that has no value in the real world.
Well, maybe disappearing doesn't happen that often... What if they appear instead?

$ touch $'абс\xFF普通话'

$ ls -l

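A minimal Perl sketch of my own (not part of the original demo, and assuming the file created above is in the current directory) of what a program actually gets back for that name: readdir hands over raw bytes, and deciding whether they are valid UTF-8 is left entirely to the program.

#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(decode);

opendir my $dh, '.' or die "opendir: $!";
for my $name (readdir $dh) {
    # Work on a copy: decode() with a CHECK argument may modify its source.
    my $copy = $name;
    my $ok = eval { decode('UTF-8', $copy, Encode::FB_CROAK); 1 };
    printf "%s\t%s\n", $ok ? 'valid UTF-8' : 'NOT valid UTF-8', $name;
}
closedir $dh;
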
Should software deal with it? What should it do? Let's see:
$ echo $'aaa\xFFaaa' | xclip -i # copy to clipboard
(middle click in the textarea window) aaa�aaa

Looks like Chromium does the right thing...
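
For comparison, a minimal Perl sketch (my own, not the browser's code) of the same substitution behaviour: Encode's default handling of malformed input replaces the offending byte with U+FFFD, the replacement character shown above.

#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(decode);

# "\xFF" can never occur in well-formed UTF-8.
my $bytes = "aaa\xFFaaa";

# The default CHECK mode (FB_DEFAULT) substitutes U+FFFD for the bad byte.
my $text = decode('UTF-8', $bytes);

printf "%vx\n", $text;   # 61.61.61.fffd.61.61.61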

The world is actually full of garbage strings :)
I'm not even sure that is true -- just move to the end and step backwards
Well, basically, there are a ton of 'false positives'.

$ perl -MEncode -mutf8 -e 'printf "%vx\n", Encode::encode( "UTF-16", "ジ" )'

fe.ff.0.e3.0.82.0.b8
$ perl -MEncode -e 'printf "%vx\n", Encode::decode( "UTF-16", "\xFE\xFF\x00\x82\x00\xB8")'
82.b8

A perfectly good codepoint; unfortunately, it's Chinese instead of Japanese...
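
To make the 'false positives' point concrete, a small sketch of my own: the raw UTF-8 bytes of ジ also decode without a single error as Latin-1 and as Shift JIS, so the mere fact that a decode succeeds tells you nothing about which encoding the data was actually in.

#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(decode);

# The UTF-8 bytes of the katakana character above (U+30B8).
my $bytes = "\xE3\x82\xB8";

for my $enc ('UTF-8', 'ISO-8859-1', 'shiftjis') {
    # Copy, because decode() with a CHECK argument may modify its source.
    my $copy = $bytes;
    my $ok = eval { decode($enc, $copy, Encode::FB_CROAK); 1 };
    printf "%-12s %s\n", $enc, $ok ? 'decodes without error' : 'rejected';
}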

(it's so painful to make PerlMonks display what I want to display... does anyone have any tips? I use <tt> and <p>, and it's a pain)
I've never suggested that we should return to 1-byte encodings; but you have to recognise that variable-length encoding undoes 50 years of research into search/sorting/comparison algorithms for no real benefit.
As I said, I see no real benefit in variable length now. Maybe it made some sense when dinosaurs roamed the Earth and modems were 2400 bps.
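
A concrete illustration of that cost, as a sketch of my own (assuming the script itself is saved as UTF-8): with a variable-length encoding, byte offsets and character offsets are different quantities, so "the Nth character" is no longer a constant offset the way it is with a fixed-width encoding.

#!/usr/bin/perl
use strict;
use warnings;
use utf8;                           # the string literal below is UTF-8 source text
use Encode qw(encode);

my $str   = "абс普通话";            # 6 characters
my $bytes = encode('UTF-8', $str);  # 15 bytes

printf "characters: %d\n", length $str;    # 6
printf "bytes:      %d\n", length $bytes;  # 15

# A fixed byte offset lands in the middle of a character:
printf "bytes 0..2: %vx\n", substr($bytes, 0, 3);   # d0.b0.d0 (splits the second character)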
Very few people (basically, only the US and IETF) went straight from 7-bit to unicode. There are huge amounts of research and data that were produced using JIS/Kanji, Cyrillic, Hebrew, Arabic et al, and unicode is not compatible with any of it.
And none of it is compatible with each other... so none of it is general-purpose. Is it unreasonable to expect that a typical computer user (not a programmer) in 2015 would be able to use Kanji, Cyrillic, Hebrew, Arabic, etc. in a single document? (and without pain?) That seems a very reasonable feature request...

No, I don't think it was ever really supposed to make programmers' lives easier. Oh well, c'est la vie.


