Thanks everyone for the replies.
Let me explain more about my project. I have a 20M email list (saved in MySQL) which we want to track in our campaigns. So instead of using the original email addresses in the related URLs, we want to use IDs. A 20-byte hexadecimal ID is good enough for uniqueness across 20M emails, but I am trying to find better options: if we can use a shorter ID, we can significantly reduce the table size, and there are other benefits from doing so...
For testing purposes, I used 3M emails and an 8-byte ID, with Digest::SHA1 and Convert::zBase32 to build the IDs:
use Digest::SHA1    qw(sha1_hex);
use Convert::zBase32 qw(encode_zbase32);

sub setid_1
{
    return undef if not $_[0];
    # first 8 hex characters of the salted SHA-1 digest
    return substr( sha1_hex( _secret_code() . $_[0] ), 0, 8 );
}

sub setid_2
{
    return undef if not $_[0];
    # first 8 z-base-32 characters of the encoded hex digest
    return substr( encode_zbase32( sha1_hex( _secret_code() . $_[0] ) ), 0, 8 );
}
I was hoping the second one would give at least as good uniqueness, but the result is: I got 773 duplicated IDs from setid_1 and 676,131 duplicated IDs from setid_2.
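A sketch of why setid_2 collides so badly (my back-of-the-envelope numbers, not from the thread): z-base-32 packs 5 bits into each output character, so the first 8 encoded characters decode to only the first 5 bytes of the input. Because the input here is sha1_hex's ASCII hex text, each of those bytes carries only 4 bits of entropy, so an 8-character setid_2 prefix spans roughly 2**20 values, versus 2**32 for 8 raw hex characters:

```perl
use strict;
use warnings;

# Keyspace of each 8-character prefix (assuming z-base-32 packs 5 bits
# per character, so 8 output chars decode to 5 input bytes, and each
# byte of ASCII hex text holds only 4 bits of entropy):
print "setid_1: 2**32 = ", 2**32, " possible IDs\n";  # ~4.3 billion
print "setid_2: 16**5 = ", 16**5, " possible IDs\n";  # 1,048,576 -- fewer than 3M emails

# Birthday-bound estimate of expected duplicates when drawing n random
# values from a space of 2**bits: n*(n-1)/2 / 2**bits.
sub expected_dups {
    my ($n, $bits) = @_;
    return $n * ($n - 1) / 2 / 2**$bits;
}

printf "3M emails at 32 bits:  ~%.0f dups\n", expected_dups( 3e6, 32 );
printf "20M emails at 64 bits: ~%.5f dups\n", expected_dups( 20e6, 64 );
```

The 32-bit estimate (on the order of a thousand duplicates) is consistent with the 773 observed, while at 64 bits a single duplicate among 20M emails is vanishingly unlikely. One fix worth trying for setid_2 is to encode the raw 20-byte digest (sha1 rather than sha1_hex) before truncating, so each encoded character carries full entropy.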
Any better way to handle such issue?
thanks again
lihao
Well, 32 signed bits gives about 2,000M possibilities (2G).
If you use all 32 bits unsigned, you get twice as many.
Obviously you can have the DB generate those numbers.
It sounds like what you want is to encode some long string
into a LOT shorter representation that doesn't have to be unique.
That is a hash function.
There are all sorts of hash functions, but since we are programming
in Perl, my initial thought would be: how does Perl do this?
This is the classic Perl hash function, written in C:

unsigned int perl_hash (const char *key, int klen)
{
    unsigned int hash = 0;
    while (klen--)
        hash = hash * 33 + (unsigned char) *key++;
    return hash;
}
klen is the number of characters to be encoded. For example, this string
has 47 characters, so klen would be 47:
12345678901234567890123456789012345678901234567
Internally, Perl chops the hash down to a practical bucket index
with: index = hash & xhv_max. In your case,
forget it - don't worry about what xhv_max is, use all the bits! (Though you should consider the implications of the sign bit.)
If 32 bits isn't enough, then 64 bits will do it; that's 8 bytes. Don't mess with 48 bits or anything like that. Powers of 2 are magic on most machines in common use, e.g. 2, 4, 8, 16, 32, 64, 128.
In summary, forget about SHA-1, SHA-2, or any other cryptographic digest; use an
efficient hash encoding technique. I would try a 32- or 64-bit version of
what Perl itself does!
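A minimal Perl sketch of that suggestion (the function name and masking are mine, not from the thread): the same hash * 33 + char loop, kept to 32 bits and rendered as 8 hex characters.

```perl
use strict;
use warnings;

# Sketch of a 32-bit version of Perl's multiply-by-33 hash (function
# name is illustrative). The mask keeps the value in 32 bits, and the
# intermediate product stays well under 2**53, so plain Perl numbers
# remain exact.
sub setid_33 {
    my ($key) = @_;
    return undef unless defined $key and length $key;
    my $hash = 0;
    for my $c ( unpack 'C*', $key ) {
        $hash = ( $hash * 33 + $c ) & 0xFFFF_FFFF;
    }
    return sprintf '%08x', $hash;
}

print setid_33('someone@example.com'), "\n";
```

Whether this actually collides less than truncated SHA-1 at the same width is worth measuring on the real email list: a multiplicative hash is fast, but its output is not as uniformly distributed as a cryptographic digest's.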
Update: The Perl hash algorithm works very well based upon my subjective empirical judgment with just 120K hash keys. In any case, I am confident that 20 bytes aren't necessary and that 8 bytes will yield the "uniqueness" you need. That would also allow the keys to be resident in memory. But I think the idea of using a DB is even better, as it scales gracefully to HUGE structures.