in reply to Question: methods to transfer a long hexadecimal into shorter string

So you have 20M records, and for each of those records you have generated a SHA-1 signature. First, that is not OK as an ID from the get-go, because a SHA-1 signature is not guaranteed to be unique. The idea of compressing a non-unique set of bits into a smaller set of bits that IS unique just doesn't make sense!

You haven't explained how big this DB is. I guess it is possible, although VERY unlikely, that this DB is small enough to be memory resident.

If we think about storing just the 20M SHA-1 signatures, each is 20 bytes. For the hardware, powers of 2 are magic: 2, 4, 8, 16, 32. In practice, each signature will take 32 bytes: eight 32-bit (4-byte) words or sixteen 16-bit (2-byte) words. That is a fair amount of memory for 20M records (about 640MB), and these "keys" (they are SHA-1 signatures) aren't even unique! I don't know what your plan is to deal with that. And of course, besides the memory to store the SHA-1 signatures, there has to be some data that points to something (on disk or wherever). That will take some bytes too!
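A quick back-of-envelope check of that figure in Perl (just the arithmetic from the paragraph above, nothing new):

my $records   = 20_000_000;
my $key_bytes = 32;    # 20-byte SHA-1 rounded up to the next power of 2
printf "keys alone: %.0f MB\n", $records * $key_bytes / 1e6;    # ~640 MB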

You need a database. Perl DBI, in its many flavors, can easily handle 20M records. Forget SHA-1 or SHA-2; they make no sense here. Let the DB use its own hash algorithm.
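A minimal sketch of that approach with DBI and MySQL. The connection parameters, table name, and column sizes are assumptions for illustration, not anything from the original post:

use strict;
use warnings;
use DBI;

# Hypothetical connection details -- adjust to your setup.
my $dbh = DBI->connect( 'dbi:mysql:database=campaigns', 'user', 'password',
                        { RaiseError => 1 } );

# Let the DB index the email column itself -- no SHA-1 required.
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS email_ids (
        id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        email VARCHAR(255) NOT NULL,
        UNIQUE KEY (email)
    )
});

A 4-byte unsigned integer covers 20M rows many times over, and the UNIQUE index guarantees the uniqueness that a truncated SHA-1 cannot.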


Re^2: Question: methods to transfer a long hexadecimal into shorter string
by lihao (Monk) on Aug 07, 2009 at 21:16 UTC

    Thanks, everyone, for the replies.

    Let me explain more about my project. I have a 20M email list (saved in MySQL) which we want to track in our campaigns. So instead of using the original emails in the related URLs, we want to use IDs. A 20-byte hexadecimal ID is good enough for the uniqueness of 20M emails, but I am trying to find better options: if we can use a shorter ID, we can significantly reduce the table size, and there are more benefits to doing so...

    For testing purposes, I used 3M emails, an 8-byte ID, and Digest::SHA1 plus Convert::zBase32 to build the IDs:

    sub setid_1 {
        return undef if not $_[0];
        return substr( sha1_hex( _secret_code() . $_[0] ), 0, 8 );
    }

    sub setid_2 {
        return undef if not $_[0];
        return substr( encode_zbase32( sha1_hex( _secret_code() . $_[0] ) ), 0, 8 );
    }

    I was hoping the second one would at least give me better uniqueness, but the result is: I got 773 duplicated IDs from setid_1 and 676,131 duplicated IDs from setid_2.

    Any better way to handle this issue?

    thanks again

    lihao

      Well, 32 signed bits gives about 2,000M possibilities (2^31, roughly 2.1 billion). If you use all 32 bits, you get twice as many. Obviously you can have the DB generate those numbers.
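      For instance, with the hypothetical email_ids table sketched earlier in this thread, the DB hands the generated number straight back (last_insert_id is a standard DBI method):

      # $dbh and $email as set up in the earlier sketch
      $dbh->do( 'INSERT INTO email_ids (email) VALUES (?)', undef, $email );
      my $id = $dbh->last_insert_id( undef, undef, 'email_ids', 'id' );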

      It sounds like what you want to do is encode some long string into a MUCH shorter representation that doesn't have to be unique. That is a hash function.

      There are all sorts of hash functions, but since we are programming in Perl, my initial thought would be: how does Perl do this?

      This is the Perl hash function written in C:

      int i = klen;
      unsigned int hash = 0;
      char *s = key;
      while (i--)
          hash = hash * 33 + *s++;
      klen is the number of characters to be encoded. For example, the string "12345678901234567890123456789012345678901234567" has 47 chars, so its klen would be 47.
      Internally, Perl chops the hash down to a practical index number with: index = hash & xhv_max. In your case, forget that step: don't worry about what xhv_max is, use all the bits! (Well, maybe you should consider the implications of the sign bit.)

      If 32 bits isn't enough, then 64 bits will do it; that's 8 bytes. Don't mess with 48 bits or anything like that. Powers of 2 are magic on most machines commonly in use, e.g.: 2, 4, 8, 16, 32, 64, 128.

      In summary, forget about SHA-1 or SHA-2 or any other cryptographic digest; use an efficient hash encoding technique. I would try a 32- or 64-bit version of what Perl itself does!
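      Here is a minimal pure-Perl sketch of that idea: the x33 loop from the C snippet above, masked down to 32 bits. The sub name is made up for illustration, and collisions are still possible at 32 bits, so check for duplicates. (A 64-bit variant needs a 64-bit Perl and some care with multiplication overflow, so it is left out here.)

      use strict;
      use warnings;

      # x33 hash, as in the C snippet above, kept to 32 unsigned bits
      sub hash33_32 {
          my ($key) = @_;
          my $hash = 0;
          for my $c ( unpack 'C*', $key ) {
              $hash = ( $hash * 33 + $c ) & 0xFFFF_FFFF;   # mask avoids sign-bit worries
          }
          return $hash;
      }

      # 8 hex chars = 32 bits, comparable to the substr( ..., 0, 8 ) IDs above
      printf "%08x\n", hash33_32( 'someone@example.com' );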

      Update: The Perl hash algorithm works very well based on my subjective, empirical judgment with just 120K hash keys. Anyway, I am confident that 20 bytes aren't necessary and that 8 bytes will yield the "uniqueness" that you need. That would allow the keys to be resident in memory. But I think the idea of using a DB is even better, as it scales gracefully to HUGE structures.