Hi rjohn1,
Are you really only expecting a total of 200000 unique strings across all runs of the program? Or is it 200000 different strings per run of the program? If it's the former, then sure, you could write a function that maps those 200000 strings to unique numbers. But if it's the latter, then remember that any algorithm you write has to handle all possible inputs across all runs of the program, and in that case 32 bits to represent them may no longer be enough, depending on your input.
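To make that concrete, here's a minimal sketch (names like id_for are my own, hypothetical choices, not anything from your code) of mapping each unique string to a sequential integer using a plain Perl hash. With an incrementing counter, 200000 strings per run fit easily into 32 bits:

```perl
use strict;
use warnings;

my %id_for;       # string => integer ID
my $next_id = 0;  # IDs handed out in order of first appearance

sub id_for {
    my ($str) = @_;
    # Assign a fresh ID only the first time we see this string.
    $id_for{$str} = $next_id++ unless exists $id_for{$str};
    return $id_for{$str};
}

# Repeated strings get the same ID back.
print id_for("apple"),  "\n";   # 0
print id_for("banana"), "\n";   # 1
print id_for("apple"),  "\n";   # 0 again
```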
Anyway, all of this is very theoretical, including the worrying about efficiency. Knowing that Perl's hashes are already pretty fast, I'd recommend just trying to write some code. Not only will you then be able to say definitively whether the code runs too slowly for your purposes, you'll also have a baseline to compare any optimizations against. Optimization is not a matter of feeling, it's more of a science: measure the performance of the code to find which parts run slow, try an optimization on that part of the code, measure again to see if it made a difference, and so on. Of course some basic knowledge is necessary, like knowing that a hash lookup will outperform grep {$_ eq $what} @array, or knowing what the Schwartzian transform is, but too much worrying also costs precious time :-)
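For example, the core Benchmark module makes that kind of measurement easy. This sketch (with made-up sample data) compares a linear grep scan against a hash lookup for the same membership test:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Made-up sample data: 1000 strings, looking for one near the end.
my @array = map { "string$_" } 1 .. 1_000;
my %hash  = map { $_ => 1 } @array;
my $what  = 'string999';

# cmpthese runs each sub the given number of times and prints a
# comparison table; the hash lookup should win by a wide margin.
cmpthese( 20_000, {
    grep_scan   => sub { my $found = grep { $_ eq $what } @array },
    hash_lookup => sub { my $found = exists $hash{$what} },
});
```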
Regards, -- Hauke D
Thanks Hauke. For me it is 200000 strings per run of the program. I agree with your statements. From all the responses it looks like traditional hashes should do the job. Anyway, let me check the speed impact. As you rightly said, it is a science :) I appreciate your time and advice, and I will try out the suggestions. Good Day!
Hi rjohn1,
I should probably add that what I said about algorithms mapping strings to 32-bit numbers applies to hashing/checksumming functions. There may be other algorithms or data structures that could be used to check incoming strings for matches against an existing set of strings (like a tree structure, maybe a trie), but that's not my area of expertise. I'd still try Perl's hashes first, at the very least to get a quick prototype implementation or a baseline; it may very well turn out that they're good enough for you.
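For the "check incoming strings against an existing set" case, a Perl hash used as a set is usually all you need. A minimal sketch (with made-up input data):

```perl
use strict;
use warnings;

my %seen;                                   # hash used as a set
my @incoming = qw(foo bar foo baz bar foo); # made-up example input
my @unique;

for my $str (@incoming) {
    # $seen{$str}++ is false (0 or undef) only on the first sighting,
    # so this keeps each string once, in order of first appearance.
    push @unique, $str unless $seen{$str}++;
}

print "@unique\n";   # foo bar baz
```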
Regards, -- Hauke D
Perl hashes are built into Perl. They are implemented in carefully written and optimized C, so they run very fast. An algorithm you write in Perl will run on the Perl virtual machine, and so will automatically run slower than the equivalent C code.
Of course, you could use Inline::C and code your algorithm in C within your Perl program, but using Perl's hashes will be a lot easier.