in reply to numeric representation of string

How can I get a numeric representation of a utf8 encoded string?

Essentially, you cannot, for strings of any usable length.

Unicode strings stored in computers are essentially very big numbers written in base 1,114,112, the size of the full Unicode code-point range (or base 65,536 if you stick to just the Basic Multilingual Plane).

With decimal numbers encoded as a string (e.g. '12345'), each digit can be 0-9, so the size of the number grows by a factor of 10 for each extra digit. So by the time you've got 20 digits, you've exhausted the capacity of 64-bit integers. And if you move to floating point, you start losing accuracy after just 15 digits.

For hex numbers encoded as a string, each extra digit adds a factor of 16, so you exhaust 64-bit ints with only 16 digits.

With Unicode you have over a million possibilities for each 'digit', so by the time you've got a 4-character string you've exceeded the capacity of a 64-bit integer by a factor of tens of thousands.
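
To get a feel for how quickly that blows up, here is a minimal Perl sketch (my illustration, not part of the original discussion) that packs a string into one big number using the full Unicode range of 1,114,112 code points as the base; Math::BigInt is needed because a native integer overflows almost immediately.

    use strict;
    use warnings;
    use Math::BigInt;

    # Treat a string as one big number, one 'digit' per character, using the
    # full Unicode code-point range (1,114,112) as the base.
    sub string_to_number {
        my ($str) = @_;
        my $n = Math::BigInt->new(0);
        $n->bmul(1_114_112)->badd(ord $_) for split //, $str;
        return $n;
    }

    # A 4-character ASCII string already needs 21 decimal digits (~1.5e20),
    # beyond the ~1.8e19 maximum of an unsigned 64-bit integer.
    print string_to_number('perl'), "\n";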

So basically, you can give up on the idea of an accurate representation of a string as a number.

Then you move into the realm of 'lossy' representations. There is a whole science (and a lot of bunkum) that attempts to produce 'comparative' numerical values from documents: numbers derived from text that, when sorted numerically, tend to group the documents by similarity. These are used for applications such as plagiarism detection, and they are a simple starting point that may lead you in many directions.
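
One well-known technique in that vein is a simhash-style fingerprint. The rough Perl sketch below is only an illustration of the idea (the 32-bit size and the crude word-splitting are arbitrary choices of mine, and it assumes plain byte strings): each word votes on each bit of the fingerprint, so documents that share most of their words end up with fingerprints that differ in only a few bits.

    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);

    # Crude 32-bit simhash: each word's hash votes its bits up or down, so texts
    # sharing most of their words get fingerprints with a small Hamming distance.
    sub simhash32 {
        my ($text) = @_;
        my @votes = (0) x 32;
        for my $word (grep { length } split /\W+/, lc $text) {
            my $h = hex substr md5_hex($word), 0, 8;   # 32-bit hash of the word
            $votes[$_] += (($h >> $_) & 1) ? 1 : -1 for 0 .. 31;
        }
        my $sig = 0;
        $sig |= 1 << $_ for grep { $votes[$_] > 0 } 0 .. 31;
        return $sig;
    }

    my $x = simhash32('the quick brown fox jumps over the lazy dog');
    my $y = simhash32('the quick brown fox jumped over a lazy dog');
    my $bits    = sprintf '%032b', $x ^ $y;
    my $hamming = $bits =~ tr/1//;
    printf "%08x vs %08x : %d of 32 bits differ\n", $x, $y, $hamming;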

Another approach might be to use a 'rolling checksum' to detect sections of similarity. For that approach, the rsync algorithm is a useful starting point.
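
To show what 'rolling' buys you, here is a small sketch of the rsync-style weak checksum (again just my illustration): after the first window is summed, every subsequent window's checksum is derived in constant time from the previous one, rather than being recomputed from scratch.

    use strict;
    use warnings;

    # rsync-style weak checksum computed for every $win-byte window of $data.
    sub rolling_checksums {
        my ($data, $win) = @_;
        my @bytes = unpack 'C*', $data;
        return () if @bytes < $win;

        my ($sum, $wsum) = (0, 0);              # plain and position-weighted sums
        for my $i (0 .. $win - 1) {
            $sum  += $bytes[$i];
            $wsum += ($win - $i) * $bytes[$i];
        }
        my @sums = ((($wsum & 0xFFFF) << 16) | ($sum & 0xFFFF));

        for my $i ($win .. $#bytes) {
            $sum  += $bytes[$i] - $bytes[$i - $win];    # new byte in, old byte out
            $wsum += $sum - $win * $bytes[$i - $win];   # weighted sum fixed up in O(1)
            push @sums, (($wsum & 0xFFFF) << 16) | ($sum & 0xFFFF);
        }
        return @sums;
    }

    my @sums = rolling_checksums('the quick brown fox jumps over the lazy dog', 16);
    printf "%d windows, first checksum %08x\n", scalar @sums, $sums[0];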

For the most part, if your goal is simply to save space in your db, you'd probably be better off using a simple compression algorithm.
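
For a baseline, per-row compression is only a few lines with the core IO::Compress::Gzip module; the repeated sample body below is just a stand-in for an email.

    use strict;
    use warnings;
    use IO::Compress::Gzip qw(gzip $GzipError);

    my $body = "Dear list, here is the same boilerplate yet again.\n" x 200;
    gzip \$body => \my $packed
        or die "gzip failed: $GzipError";
    printf "%d bytes compressed to %d bytes\n", length $body, length $packed;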

Update: for completeness, you might find Re^3: Comparing sets of phrases stored in a database? enlightening.


Re^2: numeric representation of string
by mhearse (Chaplain) on Aug 16, 2013 at 00:37 UTC
    Wow. Thanks for the excellent and thorough answer. I think for starters I'll use your suggestion of a running checksum. Thanks again!

    Update: Just finished reading Re^3: Comparing sets of phrases stored in a database?. I'm going to do that as a project just for the fun of it. Great solution for determining similar phrases.

      Rolling checksums are great for detecting subsections that may be similar, but you then need a full 128-bit or better digest to check they actually are. And even that is not a 100% guarantee.

      Your first decision will be how big to make your rolling block size. Bigger blocks save more space but very quickly reduce the odds of finding common sections. Smaller blocks increase your odds of hits, but they also increase your odds of false positives, and with them the number of full digests you need to calculate in order to deal with those false positives.

      And then you have the problem of how you store: 'a-bit-that-is-different' 'a-bit-that-is-the-same-as-some-chunk-of-some-other-email' 'a-bit-that-is-different' a-bit-that-is-the-same-...

      Where do you store bits that are common to 2 or more emails? And how do you reference it from the places you removed it?

      If your reference mechanism is (say) the 128-bit digest of the common section, that means the size of your rolling checksum block will need to be at least twice that for you to achieve any space saving at all; and probably 4 times to be of merit.
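
      To make that reference mechanism concrete, here is a rough sketch of the bookkeeping involved, using fixed 1 KiB blocks and MD5 digests as the references (all names are illustrative, and real rolling-block deduplication is considerably more involved than this):

      use strict;
      use warnings;
      use Digest::MD5 qw(md5_hex);

      my %chunk_store;                        # digest -> chunk, shared by all bodies

      # Store one body as an ordered list of chunk digests; a chunk that appears
      # in several emails is kept only once.  Assumes $body is a plain byte string.
      sub store_body {
          my ($body) = @_;
          my @refs;
          for my $chunk (unpack '(a1024)*', $body) {
              my $digest = md5_hex($chunk);
              $chunk_store{$digest} //= $chunk;
              push @refs, $digest;
          }
          return \@refs;                      # this list is what replaces the body
      }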

      And remember, although the rsync algorithm is described as O(N), that is for comparing one pair of files or documents. If you are to do a full cross-compare of all your emails, one against the other, you are looking at an O(N²) process (N·(N-1)/2 comparisons if you're smart about it; for 10,000 emails that is still roughly 50 million pairwise comparisons).

      In the end, you'd almost certainly get better compression and save gobs of time and CPU by using gzip or similar.


        If you are to do a full cross-compare of all your emails one against the other, you are looking at an O(N²) (or O(N!) if you're smart about it) process.

        Did you forget a word? I'd be interested to see the case where the smart algorithm is O(N!), and the naive one is O(N²).

        I agree. My current code inserts email bodies into a compressed table, and that's it. Another simple idea I had was to break up the body on word boundaries, store the words in an array, then do a bulk INSERT IGNORE into a UNIQUE column. It might look something like this... although it's probably a pipe dream. But it seems logical... at least based on my stunted, repetitive vocabulary. It would have the benefit of being fast due to the lack of compression.
        CREATE TABLE words ( rowid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, word VARCHAR(255) NOT NULL UNIQUE ) ENGINE=InnoDB CHARACTER SET=utf8;
        CREATE TABLE body ( rowid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, word_order_num INT UNSIGNED NOT NULL, word_rowid INT UNSIGNED NOT NULL, FOREIGN KEY (word_rowid) REFERENCES words(rowid) ) ENGINE=InnoDB CHARACTER SET=utf8;
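
        A hypothetical DBI sketch of the bulk INSERT IGNORE part; the connection details and $body are placeholders:

        use strict;
        use warnings;
        use DBI;

        my $body = 'the quick brown fox jumps over the lazy dog';    # stand-in body
        my $dbh  = DBI->connect('dbi:mysql:database=mail', 'user', 'password',
                                { RaiseError => 1, AutoCommit => 0 });

        # INSERT IGNORE silently skips words already present in the UNIQUE column.
        my $sth = $dbh->prepare('INSERT IGNORE INTO words (word) VALUES (?)');
        $sth->execute($_) for grep { length } split /\W+/, $body;
        $dbh->commit;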