Re: Populating a hash
by ww (Archbishop) on Mar 11, 2012 at 13:18 UTC
Anytime you have to preface a desired result with "hopefully," you have a wish, not a plan.
Some suggestions for alternate coding are surely forthcoming ... but my suggestion is "benchmark"; don't just hope, know. Fewer lines emphatically does NOT eq "faster" code or "lower cpu utilization." There's so much going on behind the green curtain (cf. Wizard of Oz) that only with benchmarking, or with intimate familiarity with perlguts (which labels itself only as an "introduction" to the Perl API), the deeper reaches of Perl wisdom, and C, will you be able to confidently predict any particular case.
And not just BTW, it's my suspicion that Perl will usually do a better job of optimizing away bits of my code than I will.
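The "benchmark; don't just hope, know" advice can be made concrete with the core Benchmark module. A minimal sketch (the key list and the two population styles are invented for illustration), comparing a compact map-based population against a plain foreach loop:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Illustrative data set; any list of keys would do.
my @keys = map { "key$_" } 1 .. 10_000;

# Run each style for at least 1 CPU second and print rates side by side.
cmpthese(-1, {
    map_style => sub {
        my %h = map { $_ => 1 } @keys;   # fewer lines...
    },
    loop_style => sub {
        my %h;
        $h{$_} = 1 for @keys;            # ...vs. an explicit loop
    },
});
```

Which style wins can vary with the perl version and build, which is exactly the point: measure rather than guess.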
ww: how can you change the storage style of a data structure?
And when it comes to efficiency, I don't think that programmer effort is taken into the picture.
I have always taken efficiency to mean cpu and time consumption.
storage? Any one of many compression techniques. Think '.zip' or RLE or bit vectors or ....
efficiency? Your usual choices are usually valid, but would you consider them the best measure of the "efficiency" of producing a one-off or rarely used program? Being concerned solely with "cpu and time consumption" (see Update 1) and ignoring "programmer time" might be a serious case of premature optimization.
Update 1: I assume from context that the "time" reference meant "run time."
how can you change the storage style of a data structure?
use Storable
I don't think that programmer effort is taken into the picture. I have always taken efficiency as cpu and time consumption.
programmer effort consumes time
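Since the reply above just says "use Storable", here is a minimal sketch of what that looks like (the %config data and the temp-file handling are invented for illustration):

```perl
use strict;
use warnings;
use Storable qw(store retrieve);
use File::Temp qw(tempfile);

my %config = ( host => 'localhost', port => 8080 );

# Serialize the hash to a temporary file, then read it back.
my ($fh, $file) = tempfile();
close $fh;

store \%config, $file;            # write the structure to disk
my $restored = retrieve($file);   # returns a reference to a copy

print "$restored->{host}:$restored->{port}\n";  # prints "localhost:8080"
unlink $file;
```

Storable also offers freeze/thaw for in-memory serialization when no file is involved.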
Re: Populating a hash
by RichardK (Parson) on Mar 11, 2012 at 13:05 UTC
I don't know. Have you measured it?
You could use Benchmark to test it against other implementations and see.
Re: Populating a hash
by morgon (Priest) on Mar 11, 2012 at 15:39 UTC
I believe (though I'm not 100% sure) that in your case (since you are putting one million keys into your hash, and I assume you want to get them out again later) you should preallocate space by telling perl you need more buckets, i.e.
my %hash;
keys(%hash)=1_000_000;
The number you assign to keys is rounded up to the next power of two.
My understanding is (I would be interested to learn if this is correct) that when you have more buckets you have fewer collisions (i.e. entries that fall into the same bucket) and so retrieval of an entry is more efficient.
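morgon's suggestion assembled into one runnable piece (the key names and the exact count are illustrative). Assigning to keys(%hash) before the big insert asks perl to allocate the buckets up front:

```perl
use strict;
use warnings;

my %hash;
keys(%hash) = 1_000_000;   # hint: preallocate buckets, rounded up to 2**20

# Populate with a million entries after the preallocation hint.
$hash{"key$_"} = $_ for 1 .. 1_000_000;

print scalar(keys %hash), "\n";   # prints "1000000"
```

Whether the preallocation helps measurably is, per the other replies, a question for Benchmark rather than intuition.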