in reply to Re^2: JSON::XS (and JSON::PP) appear to generate invalid UTF-8 for character in range 127 to 255
in thread JSON::XS (and JSON::PP) appear to generate invalid UTF-8 for character in range 127 to 255

But what would be the cure?
  1. Stop pretending that unicode is 'forwards compatible' from ASCII.

    The least useful property of unicode is that a trivial subset of it can appear to be 'simple text'.

  2. Stop pretending that unicode isn't a binary format.

    Every other binary format in common use self-identifies through the use of 'signatures', e.g. "GIF87a" & "GIF89a".

  3. Recognise that unicode isn't a single format, but many formats all lumped together in a confused and confusing mess.

    Some parts have several names, some of which are deprecated. Other associated terms have meant, and in some cases still do mean, two or more different things.

  4. Recognise that there is no need and no real benefit to the "clever" variable length encoding used by some of the formats.

    It creates far more problems than it fixes; and is the archetypal 'premature optimisation' that has long since outlived its benefit or purpose.

  5. Keep the good stuff -- the identification and standardisation of glyphs, graphemes and code points -- and rationalise the formats to a single, fixed-width, self-identifying format.

    Just imagine how much simpler, safer, and more efficient it would be if you could read the first few bytes of a file and *know* what it contains.

    Imagine how much more efficient it would be if, to read the 10 characters starting at the 1073741823rd character of a file, you simply did (say):

    seek FH, 1073741823 * 3 + SIG_SIZE, 0; read( FH, $in, 10 * 3 );

    Instead of having to a) guess the encoding; and b) read all the bytes from the beginning, counting characters as you go. (A fuller sketch follows below.)

    Imagine all the other examples of stupid guesswork and inefficiency that I could have used.

    Imagine not having to deal with any of them.
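
    A minimal sketch of that random access in Perl, assuming the hypothetical fixed-width, 3-bytes-per-code-point, signature-prefixed format argued for above (BYTES_PER_CHAR, SIG_SIZE and the file name are illustrative, not part of any real standard):

    use strict;
    use warnings;

    use constant BYTES_PER_CHAR => 3;   # hypothetical fixed width per code point
    use constant SIG_SIZE       => 8;   # hypothetical signature length in bytes

    sub read_chars_at {
        my ( $fh, $char_offset, $char_count ) = @_;

        # Character positions map directly to byte positions,
        # so one seek lands on the first wanted character...
        seek $fh, SIG_SIZE + $char_offset * BYTES_PER_CHAR, 0 or die "seek: $!";

        # ...and one read fetches exactly the wanted characters.
        defined read( $fh, my $buf, $char_count * BYTES_PER_CHAR ) or die "read: $!";
        return $buf;
    }

    open my $fh, '<:raw', 'huge_file.u24' or die $!;
    my $raw = read_chars_at( $fh, 1073741823, 10 );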

Imagine that programmers said "enough is enough"; give us a simple, single, sane, self-describing format for encoding the world's data.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^4: JSON::XS (and JSON::PP) appear to generate invalid UTF-8 for character in range 127 to 255
by oiskuu (Hermit) on Dec 07, 2014 at 16:45 UTC

    You've confounded unicode with an encoding scheme (UTF-8). Some other encodings, say UCS-2, allow you to seek and read as in the example above.
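
    For instance, a minimal sketch of that with UCS-2LE (the file name and offsets are illustrative only, and no signature is assumed): every character is exactly two bytes, so character offsets map straight to byte offsets.

    use strict;
    use warnings;
    use Encode qw(decode);

    open my $fh, '<:raw', 'data.ucs2le' or die $!;

    # 2 bytes per character: seek straight to character 1073741823...
    seek $fh, 1073741823 * 2, 0 or die "seek: $!";

    # ...read exactly 10 characters' worth of bytes, then decode them.
    defined read( $fh, my $raw, 10 * 2 ) or die "read: $!";
    my $chars = decode( 'UCS-2LE', $raw );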

    ASCII is a binary encoding, too. Because it is the simplest and a common default, there's rarely any problem with it.

    Unicode is trouble, that's true. Its implementation almost invariably brings layers of abstraction, lasagna code, new levels of slow, new glitches, and sometimes, design changes that require extensive refactoring.

    As an example, take the notion of double-width glyphs. Very useful, to be sure, but also quite disruptive. The fixed-width terminal is no more; cell addressing is no longer character addressing; text-area layout re-flows as you edit it. Why stop there, in the limbo between character cell and full GUI? Why not assign a point width to every char?

      You've confounded unicode with an encoding scheme (UTF-8).

      No. I haven't. I didn't mention any specific encoding, and I deliberately did not capitalise unicode.

      You've erected a strawman.


Re^4: JSON::XS (and JSON::PP) appear to generate invalid UTF-8 for character in range 127 to 255
by karlgoethebier (Abbot) on Dec 07, 2014 at 16:06 UTC

    Thank you BrowserUk.

    Best regards, Karl

    «The Crux of the Biscuit is the Apostrophe»

Re^4: JSON::XS (and JSON::PP) appear to generate invalid UTF-8 for character in range 127 to 255
by ikegami (Patriarch) on Dec 10, 2014 at 07:46 UTC

    The least useful property of unicode is that a trivial subset of it can appear to be 'simple text'.

    I fully agree. It fails hard instead of failing safe.

    Perl could mitigate that problem by keeping track of whether a string is decoded or not.
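
    Something in that spirit can be approximated in user code today. A rough sketch (the DecodedText class and its methods are invented for illustration; neither Perl nor CPAN provides them): wrap decoded text in its own type, so byte strings and character strings can't be mixed silently and malformed input fails loudly.

    package DecodedText;
    use strict;
    use warnings;
    use Encode ();

    sub from_bytes {
        my ( $class, $encoding, $bytes ) = @_;
        # FB_CROAK makes malformed byte sequences die here, not later.
        my $chars = Encode::decode( $encoding, $bytes, Encode::FB_CROAK );
        return bless { chars => $chars }, $class;
    }

    sub chars { $_[0]{chars} }

    package main;
    my $text = DecodedText->from_bytes( 'UTF-8', "caf\xC3\xA9" );
    print length( $text->chars ), "\n";   # 4 characters, not 5 bytes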

    Recognise that unicode isn't a single format, but many formats all lumped together in a confused and confusing mess.

    I don't follow. Who thinks UTF-8 and UTF-16le are the same format?

    rationalise the formats to a single, fixed-width, self-identifying format.

    Not sure "self-identifying" makes sense. length($a) + length($b) == length($a . $b) is a nice property. It's possible to cause hard failures on misuse without self-identification.

      Who thinks UTF-8 and UTF-16le are the same format?

      I'm pretty sure that I didn't say that any particular individual or group conflated those two, or any other particular pairing of encoding schemes.

      But, the scope for confusion is designed right into the standard:

      Encoding Scheme Versus Encoding Form. Note that some of the Unicode encoding schemes have the same labels as the three Unicode encoding forms. This could cause confusion, so it is important to keep the context clear when using these terms: character encoding forms refer to integral data units in memory or in APIs, and byte order is irrelevant; character encoding schemes refer to byte-serialized data, as for streaming I/O or in file storage, and byte order must be specified or determinable.

      If you've never had this conversation with a prospective employer/user, you're one of the few lucky guys working today:

      "And the data will be supplied in Unicode files." -- "Which encoding?" -- "Que?" -- "The Unicode Standard currently defines 7 separate encoding schemes; and there are half a dozen or more now-obsolete but still commonplace other encodings that are routinely referred to as 'unicode'. Which encoding do you want the program to accept?" -- "The 'normal' one, of course." -- "There really isn't any such thing as a normal one. Each organisation tends to standardise on one or two of them; there is no consensus across organisations." -- "Hm. We'll have to accept them all then, won't we." -- "But how will we know which one is contained in any particular file?" -- "I don't know. You're the programmers, that's your problem."

      There are heuristics, but they require reading the entire file to make their guess. That is fine if your files are a few 10s of KB; but when you routinely deal with files in the 10s and 100s of GB, it's just plain broken.
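
      The core Encode::Guess module is exactly that kind of heuristic. A sketch (the suspect list and file name are illustrative; UTF-8 and BOM-marked UTF-16/32 are checked automatically):

      use strict;
      use warnings;
      use Encode::Guess;

      open my $fh, '<:raw', 'mystery.txt' or die $!;
      my $bytes = do { local $/; <$fh> };          # slurp: the guess needs the whole file

      my $enc = guess_encoding( $bytes, 'latin1' );
      die "can't guess: $enc" unless ref $enc;     # failure comes back as a plain string
      print "best guess: ", $enc->name, "\n";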

      It is also completely unnecessary. Network comms has worked perfectly fine for decades by specifying that comms should be done in network byte order.

      The variable-length encoding schemes are legacy left-overs from the 90's when memory was measured in kb and disks in MB. A space optimisation that is way past its sell-by date.

      And the only sane, fixed-length form, UTF-32, is overkill for a standard that has a pre-specified limit of 1,114,112 code points.

      A single, fixed-length UTF-24 format would still leave roughly 15 times headroom (2^24 = 16,777,216 possible values against the standard's 1,114,112 code points), and would speed up just about every operation, whether in memory or on disk.

      Not sure "self-identifying" makes sense. length($a) + length($b) == length($a . $b) is a nice property.

      Sorry, but that's a crock. Every other widely used binary file format (image/sound/video files; wordprocessor/spreadsheet/database files; CAD/CAM files; compressed/zipped/coagulated files; etc.) uses signatures. Not having them, in order to avoid:

      length($a) + length($b) - SIG_SIZE == length( SIG . substr( $a, SIG_SIZE ) . substr( $b, SIG_SIZE ) )

      is ridiculous. Like a supermarket chain failing to label their cans of food, in order to save the cost of the paper and glue.

      If the files were self-identifying, commands like cat and copy that are used to concatenate files could safely determine the contents of those files and do the concatenation correctly. As is, concatenating two 'unicode' files is impossible unless YOU (the operator) KNOW (and have verified) that they both contain the same encoding.
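
      In the signature-prefixed world being argued for, a safe concatenation tool is trivial to sketch (again assuming the hypothetical SIG_SIZE-byte signature; nothing like this exists for today's 'unicode' text files):

      use strict;
      use warnings;

      use constant SIG_SIZE => 8;    # hypothetical signature length

      sub concat_signed {
          my ( $out_path, @in_paths ) = @_;
          my ( $sig, $payload ) = ( undef, '' );

          for my $path (@in_paths) {
              open my $in, '<:raw', $path or die "$path: $!";

              my $got = read( $in, my $this_sig, SIG_SIZE );
              defined $got && $got == SIG_SIZE or die "$path: no signature";

              $sig //= $this_sig;                                # first file sets the format
              die "$path: mixed formats" if $this_sig ne $sig;   # refuse to mis-concatenate

              $payload .= do { local $/; <$in> };                # rest of the file is payload
          }

          open my $out, '>:raw', $out_path or die "$out_path: $!";
          print {$out} $sig, $payload;
      }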

      The lack of signatures just creates problems at every stage of the life of data. As does variable-length encoding.


        Sorry, but that's a crock. Every other widely used binary file format

        I don't have a problem with file formats having a signature. You hadn't mentioned that you were restricting yourself to files in the passage on which I commented.

      Or you could standardize the internal representation. A string is a sequence of code points. Storing the sequence length could be handy when dealing predominantly with string objects. Then the following cases arise:

      • (nbytes == 0 && ncodepts == 0) trivial case/empty/false
      • (nbytes > 0 && ncodepts == 0) binary blob
      • (nbytes > 0 && ncodepts == nbytes) with UTF-8 internal rep, this means string is plain ASCII
      • (nbytes > 0 && ncodepts < nbytes) generic unicode string

      Extended 8-bit charsets (ISO8859) suffer with UTF-8 internal representation, unless you hack the (ncodepts==nbytes) to indicate native format...
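
      A toy classifier along those lines (a sketch only; Perl doesn't actually carry a separate code-point count, so both counts are passed in here):

      use strict;
      use warnings;

      sub classify {
          my ( $nbytes, $ncodepts ) = @_;
          return 'empty'        if $nbytes == 0 && $ncodepts == 0;
          return 'binary blob'  if $nbytes  > 0 && $ncodepts == 0;
          return 'plain ASCII'  if $ncodepts == $nbytes;       # with a UTF-8 internal rep
          return 'unicode text';                               # $ncodepts < $nbytes
      }

      print classify( 5, 5 ), "\n";   # plain ASCII
      print classify( 6, 4 ), "\n";   # unicode text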

      More interesting is the interaction between objects. Considering a blob and a string object:

      $foo = ($str . $obj); $bar = ($obj . $str); $baz = "${obj}${str}";

      When is the blob promoted to a string, and when does the opposite happen? Object representation and efficiency are certainly big concerns, but surely the semantic implications of unicode are far more insidious.

        Basically, you're suggesting changing the UTF8 flag to become a semantic indicator of a "decoded" string (along with the other changes necessary to make that happen). That might be possible, but it might be nicer if we could distinguish "binary (unknown)" from "binary (locale-encoded text)". But then again, the Windows API uses three encodings ("UNICODE", "ANSI" and "OEM").