Hi rjohn1,
When you've got 200,000 strings like "dir1/dir2/dir3/config1.config2/config/test1", each one likely holds well over 32 bits of information, so reducing a string to 32 bits necessarily loses information. Depending on the algorithm, that means a chance of collisions: two different strings may produce the same 32-bit number, so the numbers won't be unique. You've already said you need the numbers to be unique, so I'm assuming this isn't acceptable, but just in case, there are plenty of modules that compute such checksums (just one example of many: String::CRC32).
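To make the information loss concrete, here's a minimal sketch. It truncates a digest to 32 bits using the core module Digest::MD5 rather than String::CRC32 (which is on CPAN), but the principle is the same: any function mapping arbitrary strings into 2**32 values can collide, so uniqueness is not guaranteed.

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5);

# Illustration only: reduce a string to 32 bits by taking the first
# four bytes of an MD5 digest (a CRC32 from String::CRC32 would behave
# similarly for this purpose). With 200,000 strings mapped into 2**32
# possible values, collisions are unlikely but possible, so the result
# is NOT guaranteed to be unique.
sub hash32 {
    my ($str) = @_;
    return unpack 'N', md5($str);   # first 32 bits as an unsigned int
}

my $n = hash32("dir1/dir2/dir3/config1.config2/config/test1");
printf "%u\n", $n;   # always the same number for the same string
```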
Since you want to uniquely identify each string, you'll need something like Perl's hashes (probably the best and easiest option, as BrowserUk already noted) or perhaps an indexed database column.
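For example, a hash-based sketch of what I mean (names made up): keep a counter and hand out the next integer the first time a string is seen. Unlike a 32-bit checksum, the numbers are guaranteed unique, and lookups are O(1) on average.

```perl
use strict;
use warnings;

# Assign each distinct string a small unique integer by keeping a
# counter in a hash. The same string always gets the same number back.
my %id_for;
my $next_id = 0;

sub id_for {
    my ($str) = @_;
    $id_for{$str} //= $next_id++;   # assign a new ID only on first sight
    return $id_for{$str};
}

print id_for("dir1/dir2/a"), "\n";   # 0
print id_for("dir1/dir2/b"), "\n";   # 1
print id_for("dir1/dir2/a"), "\n";   # 0 again - same string, same number
```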
Note that I've said nothing about speed gains or losses so far. When you're thinking about an optimization, the first step is to measure: how slow is your current implementation, and how fast do you need it to be? It would also help if you could tell us a bit about your current implementation; maybe there's an easy fix we haven't seen yet, like using a hash instead of grepping an array. Does your script run continually, or does it get called multiple times for each incoming string?
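As a sketch of that kind of measurement (the data set here is made up), the core Benchmark module can compare grepping an array against a hash lookup directly:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical comparison: membership test via grep over an array
# (linear scan) vs. an exists() check on a hash (constant time on
# average). The list size and lookup key are invented for illustration.
my @strings = map { "dir1/dir2/file$_" } 1 .. 10_000;
my %seen    = map { $_ => 1 } @strings;
my $target  = "dir1/dir2/file9999";

cmpthese(1000, {
    grep_array  => sub { my $found = grep { $_ eq $target } @strings },
    hash_lookup => sub { my $found = exists $seen{$target} },
});
```

On any realistic data set the hash lookup wins by orders of magnitude, which is why measuring first often reveals an easy fix like this one.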
Regards,
-- Hauke D
In reply to Re: Generating Unique numbers from Unique strings by haukex
in thread Generating Unique numbers from Unique strings by rjohn1