You've confounded unicode with an encoding scheme (UTF-8). Other encodings, say UCS-2, allow you to seek and read as in the example above.
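A small sketch of what fixed-width seeking buys you (the file name is invented, and it assumes the file really is UCS-2LE with no byte-order mark and no surrogates): character N lives at byte offset 2 * N, so you can jump straight to it instead of scanning, which UTF-8 cannot offer.

use strict;
use warnings;
use Encode qw(decode);

# Hypothetical file known (out of band) to be UCS-2LE.
open my $fh, '<:raw', 'data.ucs2le' or die "open: $!";

my $n = 1_000;                            # we want the 1000th character
seek $fh, 2 * $n, 0 or die "seek: $!";    # fixed width: offset is just 2 * N
read $fh, my $bytes, 2 or die "read: $!";

my $char = decode('UCS-2LE', $bytes);
printf "character %d is U+%04X\n", $n, ord $char;

With UTF-8 the same lookup needs a scan from the start of the file, because each character may occupy one to four bytes.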
ASCII is a binary encoding, too. Being the simplest and a common default, there's rarely any problem with it.
Unicode is trouble, that's true. Its implementation almost invariably brings layers of abstraction, lasagna code, new levels of slow, new glitches, and sometimes, design changes that require extensive refactoring.
As an example, take the notion of double-width glyphs. Very useful, to be sure, but also quite disruptive. The fixed-width terminal is no more; cell addressing is no longer character addressing; text-area layout re-flows as you edit it. Why stop there, in the limbo between character-cell and full GUI? Why not assign a point width to every char?
You've confounded unicode with an encoding scheme (UTF-8).
No. I haven't. I didn't mention any specific encoding, and I deliberately did not capitalise unicode.
You've erected a strawman.
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Thank you BrowserUk.
Best regards, Karl
«The Crux of the Biscuit is the Apostrophe»
The least useful property of unicode is that a trivial subset of it can appear to be 'simple text'.
I fully agree. It fails hard instead of failing safe.
Perl could mitigate that problem by keeping track of whether a string is decoded or not.
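One way to sketch that idea in today's Perl (an illustration only, not how Perl's internals actually work; the class name is invented): decode bytes exactly once, and keep the decoded text inside a small wrapper, so that decoded and not-yet-decoded data cannot be mixed silently.

package DecodedStr;
use strict;
use warnings;
use Encode ();

# Construct only from bytes plus a declared encoding; croak on malformed input.
sub from_bytes {
    my ($class, $encoding, $bytes) = @_;
    my $text = Encode::decode($encoding, $bytes, Encode::FB_CROAK);
    return bless { text => $text }, $class;
}
sub text { $_[0]{text} }

package main;
my $s = DecodedStr->from_bytes('UTF-8', "caf\xC3\xA9");
print length $s->text, "\n";   # 4 characters, not 5 bytes

Anything still sitting in a plain scalar is, by convention, undecoded bytes, so mixing the two becomes visible in the code instead of a silent mojibake bug.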
Recognise that unicode isn't a single format, but many formats all lumped together in a confused and confusing mess.
I don't follow. Who thinks UTF-8 and UTF-16le are the same format?
rationalise the formats to a single, fixed-width, self-identifying format.
Not sure "self-identifying" makes sense. length($a) + length($b) == length($a . $b) is a nice property. It's possible to cause hard failures on misuse without self-identification.
Who thinks UTF-8 and UTF-16le are the same format?
I'm pretty sure that I didn't say that any particular individual or group conflated those two, or any other particular pairing of encoding schemes.
But, the scope for confusion is designed right into the standard:
Encoding Scheme Versus Encoding Form. Note that some of the Unicode encoding schemes have the same labels as the three Unicode encoding forms. This could cause confusion, so it is important to keep the context clear when using these terms: character encoding forms refer to integral data units in memory or in APIs, and byte order is irrelevant; character encoding schemes refer to byte-serialized data, as for streaming I/O or in file storage, and byte order must be specified or determinable.
If you've never had this conversation with a prospective employer/user, you're one of the few lucky guys working today:
"And the data will be supplied in Unicode files." -- "Which encoding?" -- "Que?" -- "The Unicode Standard currently defines 7 separate encoding schemes; and there are half a dozen or more now obsoleted by still commonplace other encodings that are routinely referred to as 'unicode'. Which encoding do want the program to accept?" -- "The 'normal' one of course." -- "There really isn't any such thing as a normal one. Each organisation tends to standardise on one or two of them; there is no consensus across organisations." -- "Hm. We'll have to accept them all then won't we." -- "But how will we know which one is contained in any particular file?" -- "I don't know. You're the programmers, that's your problem."
There are heuristics, but they require reading the entire file to make their guess. That is fine if your files are a few tens of KB, but when you routinely deal with files in the tens and hundreds of GB, it's just plain broken.
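For what it's worth, a hedged sketch of the heuristic approach using Encode::Guess (the file name and the suspect list are made up): sampling only the head keeps it cheap on huge files, but the guess is exactly that, a guess, and it can easily come back ambiguous or wrong.

use strict;
use warnings;
use Encode::Guess;                    # part of the Encode distribution

open my $fh, '<:raw', 'huge_input.dat' or die "open: $!";   # hypothetical file
read $fh, my $head, 64 * 1024;        # sample only the first 64KB, not 100s of GB

my $enc = guess_encoding($head, qw(UTF-8 UTF-16LE UTF-16BE latin1));
if (ref $enc) {
    printf "best guess: %s\n", $enc->name;
} else {
    warn "no reliable guess: $enc\n"; # $enc holds an error/ambiguity message
}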
It is also completely unnecessary. Network comms has worked perfectly fine for decades by specifying that comms should be done in network byte order.
The variable-length encoding schemes are legacy left-overs from the 90's when memory was measured in kb and disks in MB. A space optimisation that is way past its sell-by date.
And the only sane, fixed-length form, UTF-32, is overkill for a standard that has a pre-specified limit of 1,114,112 code points.
A single, fixed-length UTF-24 format would have roughly 15 times headroom, and would speed up just about every operation, whether in memory or on disk.
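To make the arithmetic explicit: 2**24 = 16,777,216 code values against Unicode's ceiling of 1,114,112 code points is just over 15x headroom. A hypothetical sketch of such a fixed-width form (UTF-24 is not a real, standardised encoding, and the helper names are invented), stored most-significant-byte first in the spirit of network byte order:

use strict;
use warnings;

printf "headroom: %.2f\n", 2**24 / 1_114_112;   # ~15.06

# Encode a list of code points as exactly 3 bytes each, big-endian.
sub utf24be_encode {
    my @codepoints = @_;
    return join '', map { substr pack('N', $_), 1, 3 } @codepoints;
}

# Decode: every character is exactly 3 bytes, so character N starts at byte 3*N.
sub utf24be_decode {
    my ($bytes) = @_;
    return map { unpack 'N', "\0" . substr $bytes, 3 * $_, 3 }
               0 .. length($bytes) / 3 - 1;
}

my $packed = utf24be_encode(0x41, 0x263A, 0x1F600);   # 'A', a smiley, an emoji
my @back   = utf24be_decode($packed);
printf "%d bytes, code points: %s\n", length $packed,
       join ', ', map { sprintf 'U+%04X', $_ } @back;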
Not sure "self-identifying" makes sense. length($a) + length($b) == length($a . $b) is a nice property.
Sorry, but that's a crock. Every other widely used binary file format (image/sound/video files; word-processor/spreadsheet/database files; CAD/CAM files; compressed/zipped/coagulated files; etc.) uses signatures. Not having them, just to avoid:
length($a) + length($b) - SIG_SIZE == length( SIG . substr( $a, SIG_SIZE ) . substr( $b, SIG_SIZE ) )
is ridiculous. Like a supermarket chain failing to label their cans of food, in order to save the cost of the paper and glue.
If the files were self-identifying, commands like cat and copy that are used to concatenate files could safely determine the contents of those files and do the concatenation correctly. As it is, concatenating two 'unicode' files is impossible unless YOU (the operator) KNOW (and have verified) that they both contain the same encoding.
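A rough sketch of what that could look like (the four-byte magic strings and the function names are invented purely for illustration; real unicode files carry at most an optional BOM): a signature-aware concatenation can refuse to mix encodings and emit a single leading signature.

use strict;
use warnings;

# Invented magic numbers for two hypothetical self-identifying formats.
my %known_sig = (
    'U24B' => 'utf-24be',
    'U24L' => 'utf-24le',
);

sub slurp_raw {
    my ($path) = @_;
    open my $fh, '<:raw', $path or die "open $path: $!";
    local $/;
    return scalar <$fh>;
}

sub concat_with_signatures {
    my @data = map { slurp_raw($_) } @_;
    my @fmt  = map { $known_sig{ substr $_, 0, 4 } // die "unlabelled file\n" } @data;
    die "refusing to mix encodings: @fmt\n" if grep { $_ ne $fmt[0] } @fmt;
    # Keep the first file's signature, strip it from the rest, join the payloads.
    return $data[0] . join '', map { substr $_, 4 } @data[1 .. $#data];
}

# my $combined = concat_with_signatures('a.u24', 'b.u24');   # hypothetical usage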
The lack of signatures just creates problems at every stage of the life of data. As does variable-length encoding.
Or you could standardize the internal representation. A string is a sequence of code points. Storing the sequence length could be handy when dealing predominantly with string objects. Then the following cases arise:
- (nbytes == 0 && ncodepts == 0) trivial case/empty/false
- (nbytes > 0 && ncodepts == 0) binary blob
- (nbytes > 0 && ncodepts == nbytes) with UTF-8 internal rep, this means string is plain ASCII
- (nbytes > 0 && ncodepts < nbytes) generic unicode string
Extended 8-bit charsets (ISO8859) suffer with UTF-8 internal representation, unless you hack the (ncodepts==nbytes) to indicate native format...
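A rough, present-day approximation of that classification (using character length versus internal byte length as stand-ins for ncodepts and nbytes; the function name is invented, and the blob case can't be expressed because Perl keeps no separate code-point count, which is rather the point):

use strict;
use warnings;
use Encode qw(decode);

sub classify {
    my ($str) = @_;
    my $ncodepts = length $str;                        # characters
    my $nbytes   = do { use bytes; length $str };      # internal representation bytes
    return 'empty'                      if $nbytes == 0;
    return 'single-byte (ASCII/native)' if $ncodepts == $nbytes;
    return 'multi-byte unicode string';                # ncodepts < nbytes
}

print classify(''), "\n";                              # empty
print classify('hello'), "\n";                         # single-byte (ASCII/native)
print classify(decode('UTF-8', "caf\xC3\xA9")), "\n";  # multi-byte unicode string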
More interesting is the interaction between objects. Considering a blob and a string object:
$foo = ($str . $obj);
$bar = ($obj . $str);
$baz = "${obj}${str}";
When is the blob promoted to a string, and when does the opposite happen? Object representation and efficiency are certainly big concerns, but surely the semantic implications of unicode are far more insidious.
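For reference, what Perl actually does today when a byte string meets a decoded string (a small sketch, values chosen for illustration): the bytes are silently upgraded as if they were Latin-1, which is precisely where the insidiousness comes from.

use strict;
use warnings;
use Encode qw(encode);

my $str  = "\x{263A}";              # decoded text: one smiley code point
my $blob = encode('UTF-8', $str);   # the same character as three raw bytes

printf "str : is_utf8=%d length=%d\n", utf8::is_utf8($str)  ? 1 : 0, length $str;
printf "blob: is_utf8=%d length=%d\n", utf8::is_utf8($blob) ? 1 : 0, length $blob;

my $joined = $blob . $str;          # the blob is upgraded as if it were Latin-1
printf "join: is_utf8=%d length=%d\n", utf8::is_utf8($joined) ? 1 : 0, length $joined;
# length is 4, not 2: the three UTF-8 bytes became three bogus characters.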
Basically, you're suggesting changing the UTF8 flag to become a semantic indicator of a "decoded" string (along with the other changes necessary to make that happen). That might be possible, but it might be nicer if we could distinguish "binary (unknown)" from "binary (locale-encoded text)". But then again, the Windows API uses three encodings ("UNICODE", "ANSI" and "OEM").