how about a meditation? ;)
Given the knee-jerk responses in this thread, I think that would be pointless.
In a nutshell:
On disk, standard compression algorithms are far more effective.
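The on-disk point is easy to demonstrate. This is a hypothetical Python sketch of my own (not code from the thread); the sample text is an assumption, but the effect holds for any reasonably repetitive data: a general-purpose compressor over a "wasteful" fixed-width encoding easily beats raw UTF-8.

```python
import zlib

# Hypothetical sample text; any reasonably repetitive data shows the effect.
text = "Encoding Problem - UTF-8 (I think). " * 1000

utf8_size   = len(text.encode('utf-8'))       # compact variable-width form
utf32_size  = len(text.encode('utf-32-le'))   # 4 bytes/char, "wasteful"
packed_size = len(zlib.compress(text.encode('utf-32-le')))

# The compressed fixed-width form ends up far smaller than raw UTF-8,
# so the on-disk argument for a compact encoding evaporates.
print(utf8_size, utf32_size, packed_size)
```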
In memory, whatever small space saving seemed desirable in 199x makes no sense when you can get a 64GB DIMM for £140.
The penalty for any and every algorithm that slices and dices strings is huge. The inability to index directly into strings screws with every advanced searching, sorting and comparison algorithm ever devised.
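To make the indexing complaint concrete, here is a small Python sketch (my illustration, not code from the thread; it assumes well-formed UTF-8). Finding the Nth character of a UTF-8 byte string means walking every preceding lead byte, whereas with a fixed-width encoding it is a single subscript:

```python
def utf8_char_at(data: bytes, n: int) -> str:
    """n-th character of well-formed UTF-8 bytes: must scan from the
    start, O(n), because each character may occupy 1-4 bytes."""
    def seq_len(lead: int) -> int:
        # Length of a UTF-8 sequence, judged from its lead byte.
        if lead < 0x80: return 1
        if lead < 0xE0: return 2
        if lead < 0xF0: return 3
        return 4
    i = 0
    for _ in range(n):
        i += seq_len(data[i])
    return data[i:i + seq_len(data[i])].decode('utf-8')

def utf32_char_at(data: bytes, n: int) -> str:
    """n-th character of fixed-width UTF-32LE: pure arithmetic, O(1)."""
    return data[4 * n:4 * n + 4].decode('utf-32-le')
```

With a fixed-width form, every classic array algorithm (binary search, radix sort, in-place comparison) works on characters directly; with UTF-8 each of them needs the linear scan above first.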
It is a ridiculous state of affairs that this has been foisted on the world.
A fixed, 3-byte/24-bit encoding would cover every eventuality. It would allow the representation of every character in directly indexable strings; and still leave spare bits for embedded identification!
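Such a scheme is trivial to sketch. The code below is my own illustration of the idea, not an existing format: Unicode tops out at U+10FFFF (21 bits), so packing each codepoint into 3 bytes makes subscripting plain arithmetic and leaves 3 bits per character spare for the "embedded identification" mentioned above.

```python
def encode24(s: str) -> bytes:
    """Pack each codepoint into 3 big-endian bytes.
    Unicode needs at most 21 bits, so 3 bits per character are spare."""
    return b''.join(ord(c).to_bytes(3, 'big') for c in s)

def char_at24(data: bytes, n: int) -> str:
    """Direct indexing: character n occupies bytes 3n .. 3n+2."""
    return chr(int.from_bytes(data[3 * n:3 * n + 3], 'big'))

def decode24(data: bytes) -> str:
    """Inverse of encode24."""
    return ''.join(char_at24(data, n) for n in range(len(data) // 3))
```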
If I suggested that I would distribute multiple formats of graphics file -- png/jpg/gif/tiff; 2/4/8/24/32 bpp; huffman/RLE/LZW/none; RGB/CMYK/YCbCr etc. -- in headerless files, and then suggested that people could use heuristics to try to guess what they are, I'd be rightly condemned as crazy.
What the Unicode Consortium has foisted on the world is exactly equivalent.
It exists; and railing against it now is pretty pointless; but I'm at an (apparently enviable) point in my career and life where I do not have to deal with this crock, so I do not.
In reply to Re^5: Encoding Problem - UTF-8 (I think)
by BrowserUk
in thread Encoding Problem - UTF-8 (I think)
by Melly