in reply to Re: RFC: Is this the correct use of Unicode::Collate?
in thread RFC: Is this the correct use of Unicode::Collate?
tchrist,
A "common" practice for handling duplicate names in a database is to append non-printable characters after the name, in the order of insertion. This is like using base 32 (numbers 0 to 31 ) for appended characters. This allows duplicates and retains the order of insertion. You don't have a limit since when you fill the first character, you just add another as "\0" and continue from there. That would be broken with Unicode::Collate.
The implication in the article was that you could replace 'sort' with 'Unicode::Collate'.
Thank you
"Well done is better than well said." - Benjamin Franklin
Re^3: RFC: Is this the correct use of Unicode::Collate?
by moritz (Cardinal) on Jan 17, 2012 at 15:39 UTC
"The implication in the article was that you could replace 'sort' with 'Unicode::Collate'."

And that seems to be the real problem. sort isn't broken (that's just link baiting), and neither is Unicode::Collate. They just do different things.

The article does say:

"Fortunately, you don't have to come up with your own algorithm for dictionary sorting, because Perl provides a standard class to do this for you: Unicode::Collate"

So despite its title, it doesn't mandate UC to be a universal replacement for sort, but just for one application.
by flexvault (Monsignor) on Jan 17, 2012 at 16:10 UTC
moritz,

But all the references in the article are related to data in databases. I googled ASCII and UTF-8, and found many statements like "...UTF-8 uses one byte for any ASCII characters, which have the same code values in both UTF-8 and ASCII encoding...", so why are the 0 - 127 characters being redefined? I understand the complexity of the subject, but the designers of UTF-8 knew better than to mess with ASCII, and that is why UTF-8 enhances ASCII.

'Unicode::Collate' is core, so it could be used a lot in the future, as it should be. But a lot of production environments will be affected if they don't know in advance that the code points of ASCII have been redefined. My hope was that someone would say something like 'ASCII => 1' will work like Perl's 'sort' for ASCII characters, and UTF-8, etc., for anything above 127.

Thank you

"Well done is better than well said." - Benjamin Franklin
by tchrist (Pilgrim) on Jan 17, 2012 at 18:53 UTC
"'Unicode::Collate' is core, so it could be used a lot in the future, as it should be. But a lot of production environments will be affected if they don't know in advance that the code points of ASCII have been redefined."

A text sort looks nothing at all like a code point sort. You seem to think that 7-bit code points should not sort as text. That completely defeats the whole purpose. Watch what really happens (see the sketch below): a text sort looks nothing whatsoever like a code point sort.

If you expect the UCA to do a code-point sort on 7-bit code points but a text sort on everything else, I fear that you have gravely misunderstood its purpose and consequences. So what may I do to help you understand this better? I would seriously like to know.
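A minimal sketch of the kind of comparison being described (the sample strings are illustrative, chosen here rather than taken from the post): the same 7-bit strings are sorted once by code point and once with Unicode::Collate, so the two orderings can be placed side by side.

    use strict;
    use warnings;
    use Unicode::Collate;

    # A few 7-bit strings whose ordering differs between the two sorts:
    # punctuation, digits, and mixed case.
    my @words = ("zebra", "Apple", "(note)", "[draft]", "apple", "42nd");

    # Code point sort: '(' (40), digits, uppercase letters, and '[' (91)
    # all come before lowercase letters, because that is their numeric order.
    print "code point sort: @{[ sort @words ]}\n";

    # UCA text sort: case is only a tertiary difference and punctuation is
    # weak (variable), so the result is alphabetical rather than numeric.
    my $collator = Unicode::Collate->new();
    print "text sort:       @{[ $collator->sort(@words) ]}\n";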
by flexvault (Monsignor) on Jan 17, 2012 at 20:06 UTC
Re^3: RFC: Is this the correct use of Unicode::Collate?
by tchrist (Pilgrim) on Jan 17, 2012 at 16:19 UTC
A "common" practice for handling duplicate names in a database is to append non-printable characters after the name, in the order of insertion. This is like using base 32 (numbers 0 to 31 ) for appended characters. This allows duplicates and retains the order of insertion. You don't have a limit since when you fill the first character, you just add another as "\0" and continue from there. That would be broken with Unicode::Collate.I’m afraid you’ve swapped my implication with your inference, as I implied no such thing — and what you’ve inferred in no way follows from what I wrote. Quoting myself, I wrote: If you have code that purports to sort text that looks like this:See the red part? Clearly, you do not have ‘code that purports to sort text’! Therefore, nothing I wrote applies to you.@sorted_lines = sort @lines;Then all you have to get a dictionary sort is write instead: You have code that blindly does a mindless numeric sort on code points, not an alphabetic sort on text. What you are doing is not an alphabetic sort. Plus sorting of textual representations of numbers is specifically outside the scope of the UCA. Of course it’s trivial to modify the UCA sort to take care of your weirdo situation, such that it does a proper text sort on the text and a weirdo binary sort on the binary. But you have to tell it to do that. It doesn’t play mind games with you; here as always, one has to know what one is doing, and why.
by flexvault (Monsignor) on Jan 17, 2012 at 19:05 UTC
tchrist,
I re-checked and you are correct about the red part, and I was wrong for quoting you out of context. I apologize.
Do I understand you correctly that it can be done? I have read the docs on CPAN and the perldoc on my system, and I don't see how to do this. I know you think my request is "...weirdo binary sort on the..." ASCII, but I could give many instances of real-life uses where both text and binary co-exist and require sorting.

One example: a desktop calendar program where all events are in a database server. The key part of the key/value pair would contain binary ASCII data (time, duration, etc.) as well as the title for the event and possibly sequencing information (base 32). The data value would be a description of the event; no sorting is required for that, and it could be UTF-nn or ASCII. The database engine doesn't care about the data portion, only the key matters. It would be wonderful if the database engine could sort the key information so the language of the title was handled correctly and the ASCII portion is also handled correctly.

Thank you

"Well done is better than well said." - Benjamin Franklin
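One way to "tell it to do that", as tchrist puts it, is sketched below. The key layout, the separator byte, and the field order are all hypothetical, invented here for illustration: each key is split into its text part and its binary tail, the text is compared with Unicode::Collate, and a byte comparison breaks ties on the tail.

    use strict;
    use warnings;
    use Unicode::Collate;

    binmode STDOUT, ':encoding(UTF-8)';
    my $collator = Unicode::Collate->new();

    # Hypothetical key layout: "<title> \x1F <binary sequencing bytes>".
    my $SEP = "\x1F";

    sub split_key {
        my ($key) = @_;
        my ($title, $tail) = split /\Q$SEP\E/, $key, 2;
        return ($title, $tail // '');
    }

    # Text part sorted with the UCA, binary tail sorted byte-wise.
    sub by_mixed_key {
        my ($ta, $ba) = split_key($a);
        my ($tb, $bb) = split_key($b);
        return $collator->cmp($ta, $tb) || ($ba cmp $bb);
    }

    my @keys = (
        "Re\x{301}union planning${SEP}\x01",   # "Réunion", combining accent
        "Reunion planning${SEP}\x02",
        "Reunion planning${SEP}\x01",
    );

    for my $key (sort by_mixed_key @keys) {
        my ($title, $tail) = split_key($key);
        printf "%s  [seq %d]\n", $title, ord($tail);
    }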
Re^3: RFC: Is this the correct use of Unicode::Collate?
by Jim (Curate) on Jun 24, 2012 at 02:13 UTC
A "common" practice for handling duplicate names in a database is to append non-printable characters after the name, in the order of insertion. What you need is an invisible letter in Unicode. Just such a letter was proposed several years ago by typographer Michael Everson. His proposed name for the character was INVISIBLE LETTER. Unfortunately, the Unicode Consortium rejected his proposal. See Proposal to add INVISIBLE LETTER to the UCS and Every character has a story #11: U+???? (The Invisible Letter) If there were such an invisible Unicode character, you could do something like this:
This script produces this output:
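A sketch of the idea (the key layout and the ordinals are invented here for illustration, and LATIN SMALL LIGATURE FFL stands in for the rejected invisible letter, for the reason given below):

    use strict;
    use warnings;
    use open qw(:std :encoding(UTF-8));
    use Unicode::Collate;

    # Stand-in for the missing invisible letter: U+FB04 LATIN SMALL LIGATURE FFL.
    my $TAG = "\x{FB04}";

    # Presidents who share a surname, disambiguated by appending the tag
    # character plus an insertion-order ordinal to each key.
    my %president = (
        "Adams${TAG}1"      => 'John Adams',
        "Adams${TAG}2"      => 'John Quincy Adams',
        "Roosevelt${TAG}1"  => 'Theodore Roosevelt',
        "Roosevelt${TAG}2"  => 'Franklin D. Roosevelt',
        "Bush${TAG}1"       => 'George H. W. Bush',
        "Bush${TAG}2"       => 'George W. Bush',
        "Bush${TAG}3"       => 'Jeb Bush',        # assumes he is elected in 2016
        "Obama${TAG}1"      => 'Barack Obama',    # assumes he is re-elected in 2012
        "Washington${TAG}1" => 'George Washington',
    );

    # Collate the keys: surnames sort alphabetically, and duplicates stay
    # grouped and in insertion order because of the appended ordinal.
    my $collator = Unicode::Collate->new();
    for my $key ($collator->sort(keys %president)) {
        printf "%-13s %s\n", $key, $president{$key};
    }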
(For the purpose of demonstrating more than two presidents with the same last name, I had to assume Barack Obama is re-elected in 2012 and Jeb Bush is elected in 2016. I'm sorry if this prospect offends you.)

This is a pure Unicode solution to the problem. There's no commingling of Unicode characters or graphemes with binary data. Unfortunately, however, there isn't a Unicode character with the general property L (Letter) that's guaranteed to be invisible. If there were, it would be just the right character to use for this "weirdo" purpose.

Why did I use the Unicode character LATIN SMALL LIGATURE FFL in the demo script? I don't know exactly. Maybe because it's a character that collates high and seems impossibly unlikely ever to occur in real data.

Jim
by flexvault (Monsignor) on Jun 24, 2012 at 10:35 UTC
Jim,

Thank you for your input. You seem to know quite a bit about Unicode. What I tried to ask in the original post was why 'use Unicode::Collate;' changed the meaning of characters 0..31. Everything I have read talked about not changing the meaning of 7-bit ASCII.

History of the question: I don't know if you are familiar with the NoSQL database engine BerkeleyDB (now owned by Oracle), but I have written a pure Perl replacement that performs as well. In some cases where the data portion of the key/value pair is very large, it outperforms BerkeleyDB.

Most people on this forum believe that BerkeleyDB is free. Oracle has added some conditions that make it very expensive (per our law firm's counsel). One example: if a company employee downloads BerkeleyDB and installs it, that's okay. But as a software vendor, if I download it and install it, the company owes Oracle a fee based on the number of cores and the type of box. For a POWER7 IBM p-series with 32 cores, the license fee is $48,000 for the "free" BerkeleyDB. Most of our products sell for under $5,000; it's hard to ask a company to pay an additional $48K.

Since the PurePerlDB already exists, I was looking at adding a feature to use Unicode::Collate, but it broke other features of PurePerlDB. Unfortunately, my only solution now is to put the burden on the software developer to handle Unicode and duplicates, which is the same as BerkeleyDB.

Thanks again for your input...Ed

"Well done is better than well said." - Benjamin Franklin
by Anonymous Monk on Jun 24, 2012 at 11:29 UTC
"Most people on this forum believe that BerkeleyDB is free. Oracle has added some conditions that make it very expensive (per our law firm's counsel). One example: if a company employee downloads BerkeleyDB and installs it, that's okay. But as a software vendor, if I download it and install it, the company owes Oracle a fee based on the number of cores and the type of box. For a POWER7 IBM p-series with 32 cores, the license fee is $48,000 for the "free" BerkeleyDB."

Just in case anyone was wondering about it, see my take on it in "Open Source License for Berkeley DB unchanged". The situation hasn't changed with the latest Berkeley db-5.3.21; the license is essentially the same, though there is an addition of ASM for Java (it only affects the Java bits, not distribution or pricing). But I'm not a businessman or a lawyer, nor do I work for Oracle.

Regarding http://www.flexbasedb.com/, I notice you don't provide HTML, only PDF; a minor hassle.

For anyone interested in PurePerlDB/FlexBaseDB, see http://www.flexbasedb.com/FlexBaseDB_Introduction.pdf
by flexvault (Monsignor) on Jun 24, 2012 at 12:09 UTC
by Anonymous Monk on Jun 25, 2012 at 16:07 UTC
by Jim (Curate) on Jun 24, 2012 at 18:08 UTC
"I don't know if you are familiar with the NoSQL database engine BerkeleyDB (now owned by Oracle), but I have written a pure Perl replacement that performs as well. In some cases where the data portion of the key/value pair is very large, it outperforms BerkeleyDB."

I'm familiar with NoSQL and key-value stores such as Berkeley DB. But what I'd never heard of before reading your PerlMonks post is the idiom (the trick) of modifying data to disambiguate otherwise identical keys by appending control codes or invisible characters to them. This idiom seems "weirdo" to me, just as it did to Tom, who first invoked the word to describe it. Is my example Perl script a fair representation of the idiom your NoSQL database software uses to disambiguate like keys?

I'm not a database theory guru or a database programming wizard, but my gut sense is that the idiom you describe of ornamenting data with invisible control codes or other characters is fraught with problems. I understand how data modified this way would ensure uniqueness and preserve insertion order. But how then do you match such modified strings? Isn't there a better way to achieve the same objectives without altering data?

Do other NoSQL database engines besides yours use this same idiom? If so, which ones?

Jim
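For what it's worth, one plausible answer to the matching question, sketched under the same hypothetical key layout used above (a tag character followed by an ordinal): strip the suffix before comparing, so a lookup for the bare name finds every disambiguated variant.

    use strict;
    use warnings;

    # Hypothetical tag character separating the real name from the
    # disambiguating ordinal (an invisible letter, a control code, etc.).
    my $TAG = "\x{FB04}";

    # Return the key with any disambiguating suffix removed.
    sub base_key {
        my ($key) = @_;
        $key =~ s/\Q$TAG\E.*\z//s;
        return $key;
    }

    my @stored = ("Bush${TAG}1", "Bush${TAG}2", "Adams${TAG}1");

    # Matching "Bush" means comparing against the stripped form of each key.
    my @hits = grep { base_key($_) eq 'Bush' } @stored;
    print scalar(@hits), " keys match 'Bush'\n";   # prints: 2 keys match 'Bush'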