in reply to constructing large hashes

The 70MB it slurps is due to the @x array alone; I added a getc() so I could check. I also added a third method, with interesting results.
    #!/usr/bin/perl -w
    use strict;
    use Benchmark qw(cmpthese);

    # 80,001 comma-separated digit strings; building @x alone
    # accounts for the ~70MB footprint
    my @x = map join(',', split(//, rand(10000))), 0..80000;
    my %y;

    print "Check memory now.", $/;
    getc();

    cmpthese(20, {
        map   => sub { undef %y; %y = map(($_ => undef), @x) },
        for   => sub { undef %y; @y{$_} = undef for @x; },
        slice => sub { undef %y; @y{@x} = undef },
    });

    print scalar %y, $/;

    __END__
    Check memory now.
    Benchmark: timing 20 iterations of for, map, slice...
       for: 16 wallclock secs (15.47 usr +  0.27 sys = 15.74 CPU) @  1.27/s (n=20)
       map: 28 wallclock secs (26.67 usr +  0.30 sys = 26.97 CPU) @  0.74/s (n=20)
     slice: 14 wallclock secs (13.54 usr +  0.24 sys = 13.78 CPU) @  1.45/s (n=20)
            Rate   map   for slice
    map   0.742/s    --  -42%  -49%
    for    1.27/s   71%    --  -12%
    slice  1.45/s   96%   14%    --
    59853
Update: I had mixed up the for and slice labels. for doesn't win.
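
If you'd rather not eyeball top(1) during the getc() pause, here's a rough sketch of having the script report its own resident set size; it assumes a Linux-style /proc filesystem, and rss_kb is just an illustrative name:

    sub rss_kb {
        # Linux-specific: pull our own VmRSS out of /proc/self/status
        open my $fh, '<', '/proc/self/status' or return;
        while (my $line = <$fh>) {
            return $1 if $line =~ /^VmRSS:\s+(\d+)\s+kB/;
        }
        return;
    }

    my @x = map join(',', split(//, rand(10000))), 0..80000;
    my $rss = rss_kb();
    print 'RSS after building @x: ', (defined $rss ? "$rss kB" : 'unknown'), $/;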

Makeshifts last the longest.

Re: Re: constructing large hashes
by duelafn (Parson) on Oct 01, 2002 at 00:26 UTC

    Hmmm, after reading the responses above, I bet the memory differences come from two things: a) floats are larger than short strings, and b) there is more structure involved in this hash, which is what causes the faster lookup times.
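
    For what it's worth, bet (a) is easy to put a number on with Devel::Size; a sketch, assuming the module is installed (the example values are made up):

        #!/usr/bin/perl -w
        use strict;
        use Devel::Size qw(size);

        my $float  = rand(10000);  # numeric scalar, e.g. 8146.23151397705
        my $string = '8146';       # short string scalar

        # size() reports the bytes a single scalar occupies
        printf "float:  %d bytes\n", size($float);
        printf "string: %d bytes\n", size($string);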

    Also, you might want to correct the labels in your benchmarks, lest we be led astray!

    Thanks!
        Dean


    If we didn't reinvent the wheel, we wouldn't have rollerblades.

      What the... I must have been half asleep already. Sorry. Fixed it, and now I'm off to bed. Thanks for the wrist slap.

      Makeshifts last the longest.