in reply to Generating Unique numbers from Unique strings

Hi rjohn1,

When you've got 200000 strings like "dir1/dir2/dir3/config1.config2/config/test1", they likely hold a whole lot more than 32 bits of information each, which means that when you reduce each string down to 32 bits you will lose information. Depending on the algorithm, that results in a chance of collisions - two different strings might produce the same 32-bit number, so they won't be unique. You've already said you want the numbers to be unique, so I'm assuming this is not acceptable, but just in case, there are lots of functions to do that (just one example of many: String::CRC32).
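For illustration, here's a small sketch of what that looks like, assuming the String::CRC32 module from CPAN is installed. Note that this is exactly the kind of reduction that can collide:

```perl
use strict;
use warnings;
use String::CRC32;   # CPAN module - assumed to be installed

my $str = "dir1/dir2/dir3/config1.config2/config/test1";
my $num = crc32($str);   # a 32-bit unsigned integer
printf "%s -> %u\n", $str, $num;

# Important caveat: two different strings CAN produce the same
# number, so this does NOT guarantee uniqueness.
```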

Since you said you want to uniquely identify each string, then you'll have to use something like Perl's hashes (probably best and easiest, as BrowserUk already noted) or maybe an indexed database column.
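A minimal sketch of the hash approach: hand out the next sequential number the first time each string is seen, so every distinct string is guaranteed a unique ID (the names here are made up for illustration):

```perl
use strict;
use warnings;

my %id_for;      # string => unique number
my $next_id = 0;

sub unique_id {
    my ($str) = @_;
    # assign a new sequential ID the first time we see this string
    $id_for{$str} = $next_id++ unless exists $id_for{$str};
    return $id_for{$str};
}

print unique_id("dir1/dir2/config/test1"), "\n";  # 0
print unique_id("dir1/dir2/config/test2"), "\n";  # 1
print unique_id("dir1/dir2/config/test1"), "\n";  # 0 - same string, same number
```

Unlike a 32-bit checksum, this can never collide, at the cost of keeping the hash in memory for the lifetime of the program.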

Note I've said nothing about speed gains or losses so far. When you're thinking about an optimization, the first step is to measure: How slow is your current implementation? How fast do you need it to be? Also it'd be helpful if you could tell us a bit about your current implementation. Maybe there's an easy fix that we haven't seen yet, like using a hash instead of grepping an array. Does your script run continually, or does it get called multiple times for each incoming string?

Regards,
-- Hauke D


Re^2: Generating Unique numbers from Unique strings
by rjohn1 (Sexton) on Apr 03, 2016 at 16:36 UTC

    Hi Hauke,

    Thanks for the explanation. A 32-bit number has 2^32 combinations, so theoretically speaking it can represent 200000 strings easily without repeating, but the algorithm would have to be tailored for it. I am not sure if you agree. I was curious whether such a unique-number-generating algorithm exists for Perl.

    Honestly speaking, I did not think of keying my incoming strings in hashes, as I was not sure of the performance hit of doing an if(exists $storage{"String"}).

    I would like it to be as efficient as possible, as it is a Perl/Tk GUI showing real-time functional tests which fail with a given signature.

    So if it is not efficient, the GUI shows some slowness, even though I fork it off.

      Hi rjohn1,

      Are you really only expecting a total of 200000 unique strings across all runs of the program? Or is it 200000 different strings per run of the program? If it's the former, then sure, you could write a function that maps those 200000 strings to unique numbers. But if it's the latter, then remember that any algorithm you write has to handle all possible inputs across all runs of the program, and in that case 32 bits to represent them may no longer be enough, depending on your input.

      Anyway, all of this is very theoretical, including worrying about efficiency - I'd recommend that, knowing Perl's hashes are already pretty fast, you just try writing some code. Not only will you then be able to say definitively whether the code runs too slowly for your purposes or not, you'll have a baseline that you can compare any optimizations you make against. Optimization is not a matter of feeling, it's more of a science - measure the performance of the code to find which parts are running slow, try an optimization on that part of the code, measure to see if it made a difference, and so on. Of course there is some basic knowledge necessary, like for example knowing that hash lookups will outperform grep {$_ eq $what} @array or knowing what the Schwartzian transform is, but too much worrying also costs precious time :-)
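      As a concrete sketch of the "measure first" advice, Perl's core Benchmark module can compare the hash lookup against the grep over an array (the data set here is made up for illustration):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# made-up data: 10,000 strings, one of which we look for
my @array = map { "string_$_" } 1 .. 10_000;
my %hash  = map { $_ => 1 } @array;
my $what  = "string_9999";

cmpthese(10_000, {
    grep_array  => sub { my $found = grep { $_ eq $what } @array },
    hash_lookup => sub { my $found = exists $hash{$what} },
});
# The hash lookup is a constant-time operation, while grep scans
# the whole array, so the hash typically wins by orders of magnitude.
```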

      Regards,
      -- Hauke D

        Thanks Hauke. For me it is 200000 strings per run of the program. I agree with your statements.

        From all the responses it looks like traditional hashes should do the job. Anyway, let me check the speed impact.

        As you rightly said, it is a science :) I appreciate your time and advice.

        I will try out the suggestions. Good day!

      Perl hashes are built into Perl. They are implemented in carefully written and optimized C, so they run very fast. An algorithm you write in Perl will run on the Perl virtual machine, so it will generally run slower than the equivalent C code.

      Of course, you could use Inline::C and code your algorithm in C within your Perl program, but using Perl's hashes will be a lot easier.