in reply to Re^7: best sort
in thread best sort

BIG, BOLD, FLIP, AND ENTIRELY IRRELEVANT QUESTIONS HERE!

... you plan to stay in your humble hamlet the rest of your life...

I worked all over Europe and Scandinavia, including over four years in one European country with a multi-national programming team. That's one of the reasons I advocate minimal commenting. If you need to translate comments, with all their typically informal language usage, in order to understand the code you are working on, it is a nonsense that slows work to a crawl.

That's one of several reasons why a lingua-franca is even more important when working across national borders than it is when working in isolation within them. Which language serves as the lingua-franca is irrelevant; it just happens, by virtue of history, to be English.

I was jointly responsible for adding bi-directional language support -- which meant working with Hebrew, Arabic and Farsi, amongst others -- to OS/2 back in the day. I also took the lead in delivering magazine cover-mounted install CDs for OS/2 Warp in 13 different languages.

In both cases, the native-language text was treated internally as opaque binary, indexed by language and message number. It would have been impossible to become sufficiently versed in all the required languages for either project, so pragmatism reigned, as it should.
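
A sketch of the shape of that arrangement, in Perl for concreteness; the language tags, message numbers and strings below are invented for illustration:

    # Message catalogue: the text is opaque bytes, looked up purely by
    # (language, message number) -- the code never interprets or
    # translates it.
    my %catalogue = (
        en => { 100 => "Insert disk 2",         101 => "Installation complete" },
        he => { 100 => "<opaque Hebrew bytes>", 101 => "<opaque Hebrew bytes>" },
    );

    sub message {
        my ($lang, $msgno) = @_;
        return $catalogue{$lang}{$msgno};    # pure lookup; no parsing
    }

    print message('en', 101), "\n";    # "Installation complete"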

Translations to target languages were performed by native-language/English bilinguals and verified by English/native language bilinguals. And the process iterated until they agreed. The code was written by a variety of nationals -- me, an Israeli and an Egyptian for the former; and a whole bunch of nationalities for the latter -- in the English-based computer language of choice (C), with comments in English where necessary. Any other approach would have been silly.

I'm a mono-plus-several-less-than-halfs-glot, but my linguistic skills are irrelevant unless you are truly advocating that every programmer should learn every human language on the planet? If not -- indeed, even if you were -- the only sensible solution is a lingua-franca, so that each programmer need only learn their native language plus that lingua-franca: not all 7,000 natural languages, nor even the 200 or so in common use around the world.

Whatever language is the lingua-franca, one group of programmers will have an advantage. As it is, that means I haven't had to become properly bilingual. Had it been French or German or Italian or Spanish, I would probably still have had a successful career. How would you do if it suddenly changed to Mandarin Chinese or Urdu or one of the languages written in the Cyrillic alphabet?

you’re an English monoglot who refuses to spell imported words or even people’s names they wish them spelled...

I never suggested for one moment that text should be ASCII-only. Only that the Unicode mechanism whereby I can receive a file of "text" and have absolutely no way of determining which of the many Unicode encodings it contains -- nor even whether it actually contains any of them -- is a nonsense.

Unicode as it stands is multiple fixed- and variable-length binary formats with no identifiers or headers. As I said, imagine taking a directory of mixed-format image files, stripping out the headers, and then writing a program to work out how to display them all. That's a direct analogy to the situation today with "unicode". It is farcical!
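
To make the complaint concrete, here is a small Perl sketch using the core Encode module: the same byte string decodes without error under more than one Unicode encoding, and nothing in the bytes themselves says which was intended.

    use Encode qw(decode);

    # Four bytes with no header, no BOM, no label.
    my $bytes = "\x41\x00\x42\x00";

    # Read as UTF-16LE, they are the two characters "AB" ...
    my $utf16 = decode('UTF-16LE', $bytes);

    # ... but they are also perfectly valid UTF-8: "A", NUL, "B", NUL.
    my $utf8 = decode('UTF-8', $bytes);

    printf "UTF-16LE: %s\n", join ' ', map { sprintf 'U+%04X', ord } split //, $utf16;
    printf "UTF-8:    %s\n", join ' ', map { sprintf 'U+%04X', ord } split //, $utf8;
    # UTF-16LE: U+0041 U+0042
    # UTF-8:    U+0041 U+0000 U+0042 U+0000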

In other words, put up or shut up.

When you address my question about how you are going to solve the problem of sorting names written in Latin, Cyrillic, Arabic, Farsi, Thai, Chinese, Japanese, Urdu, Gaelic, Ge'ez, Osmanya, Tifinagh ... et al., I'll consider it.

Because until then, you've only partially -- the Latin part -- solved the real problem. And that part was "solved" decades ago.
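
(For reference, the machinery usually pointed at for this problem is the Unicode Collation Algorithm, implemented in Perl by the core Unicode::Collate module; whether its default ordering is an acceptable answer for mixed-script name lists is exactly what is in dispute here. A minimal sketch, with invented names:)

    use utf8;
    use open qw(:std :utf8);
    use Unicode::Collate;

    # Names in Latin, Cyrillic and Arabic scripts, deliberately mixed.
    my @names = ("Müller", "Иван", "أحمد", "Ahmed", "Ivanov");

    # Default DUCET ordering from the Unicode Collation Algorithm.
    my $collator = Unicode::Collate->new;
    print "$_\n" for $collator->sort(@names);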


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^9: best sort
by tchrist (Pilgrim) on Aug 18, 2011 at 00:08 UTC
    You seem to have managed to confuse Unicode with serialization formats. That’s a shame.

    As for knowing what sort of data content you have, that has never been Unicode’s job. That is something one must relegate to a higher-level protocol. It’s just like receiving a file over the web: if you expect to know what to do with the file, then you need various bits of metadata to know how to handle it. If someone sends you a file but doesn’t tell you what’s in it, that’s not a Unicode problem at all; it’s a social problem, which is something else altogether. You need a better higher-level protocol, is all.
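
    (The web case makes that concrete. A minimal sketch with Perl’s HTTP::Tiny -- the URL is a stand-in -- showing the encoding arriving as protocol metadata rather than in the byte stream:)

        use HTTP::Tiny;

        # The higher-level protocol (HTTP here) labels the bytes it carries.
        my $res = HTTP::Tiny->new->get('http://example.com/');
        print $res->{headers}{'content-type'}, "\n" if $res->{success};
        # e.g. "text/html; charset=UTF-8" -- the charset rides in the
        # metadata, not in the payload itself.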

    That said, because Unicode was both exceedingly careful and also reasonably clever about how it defined its approved variable‐width serialization schemes, I have no trouble in the world at all knowing which of the three I have:

        $ perl -CS -S unichars Singleton > sample-one
        $ iconv -f UTF-8 -t UTF-16 < sample-one > sample-two
        $ iconv -f UTF-8 -t UTF-32 < sample-one > sample-three
        $ file sample-{one,two,three}
        sample-one:   UTF-8 Unicode text
        sample-two:   Little-endian UTF-16 Unicode text
        sample-three: Unicode text, UTF-32, little-endian

    There aren’t many different flavors of Unicode, as you frequently allege. There can be only one. That’s what the “uni” part is about. That’s why things like Perl and XML and HTML are always all Unicode, all the time: because it always means the same thing. It makes no matter whether you say chr(233) in Perl, &#233; in HTML, or &#xe9; in XML. Those are always the same character, because the Unicode mapping of assigned code points to characters is always the same and guaranteed never to change. And that character is always LATIN SMALL LETTER E WITH ACUTE. Similarly, something like HTML’s &eacute; always maps to Unicode code point 233. It’s not like the same character is code point 142 on a Mac and code point 221 on NeXTSTEP; that would be wrong. That’s why modern systems like Perl and HTML and XML are 100% Unicode: so that assigned code points always mean the same character. There is only one flavor of Unicode, or it wouldn’t be Unicode.
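
    (That identity is easy to check from Perl itself, using only core modules:)

        use utf8;
        use open qw(:std :utf8);
        use charnames ();

        my $e1 = chr(233);      # decimal code point, as in Perl's chr(233)
        my $e2 = "\x{E9}";      # hex escape, as in XML's &#xe9;
        my $e3 = "é";           # literal in UTF-8 source

        print "all one character\n" if $e1 eq $e2 and $e2 eq $e3;
        printf "U+%04X %s\n", ord($e1), charnames::viacode(ord $e1);
        # U+00E9 LATIN SMALL LETTER E WITH ACUTE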

    I suppose you might stump for Unicode 6.0 being a different flavor from Unicode 5.0, but that seems to be putting too fine a point on it. In any event, Unicode’s strong stability guarantees avoid train wrecks in that arena.

    Which is quite all the time I have for a belligerent anonymous coward, and then some.

      you need various bits of metadata to know how to handle it.

      Ah. So when a Unicode file is sent somewhere, it needs to be accompanied by another file containing metadata to identify which "unicode" the first contains. But what encoding is the metadata in? Now you need another file ...

      Oh yeah! That's great design.

      a belligerent anonymous coward,

      Translations:

      • belligerent: someone who doesn't immediately agree with the VIP tchrist.
      • anonymous coward: someone you can't intimidate when you run out of good arguments.

      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
        Ah. So when a Unicode file is sent somewhere, it needs to be accompanied by another file containing metadata to identify which "unicode" the first contains. But what encoding is the metadata in? Now you need another file ...
        BZZZT!

        (And thank you for playing.)

        Precisely what part of “There is no such thing as a Unicode file.” was it that you didn’t understand?