The output shows these are the same.
This warning is the important part: "too few iterations for a reliable count" means the numbers it printed can't be trusted.
You never actually benchmark with_keys or with_each; you benchmark their return values instead.
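To make that concrete, here is a minimal sketch of the difference (the with_keys/with_each names come from your post; the hash and the sub bodies are stand-ins I made up):

#!/usr/bin/perl --
use strict;
use warnings;
use Benchmark qw( cmpthese );

my %hash = map { $_ => 1 } 0 .. 9_999;

# hypothetical stand-ins for the subs being compared
sub with_keys { my $h = shift; my $n = 0; $n++ for keys %$h; return $n }
sub with_each {
    my $h = shift;
    my $n = 0;
    while( my( $k, $v ) = each %$h ){ $n++ }
    return $n;
}

# Broken: each sub runs ONCE, right here, and cmpthese is handed its
# return value (a number), which Benchmark treats as a string of code
# to eval.  That evals to nearly nothing, finishes instantly, and
# triggers the "too few iterations for a reliable count" warning.
cmpthese( 1000, {
    with_keys => with_keys( \%hash ),
    with_each => with_each( \%hash ),
} );

# Fixed: pass code references, so cmpthese runs the subs themselves.
cmpthese( 1000, {
    with_keys => sub { with_keys( \%hash ) },
    with_each => sub { with_each( \%hash ) },
} );

The fuller benchmark below does it the second way.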
Consider:
#!/usr/bin/perl --
use strict;
use warnings;
{
    use Benchmark;

    print "##########\n";
    print scalar gmtime, "\n";
    print "perl $] \n";

    # [ cmpthese count, hash size ]: -3 means "run each sub for at
    # least 3 CPU seconds", 10 means "run each sub exactly 10 times"
    for my $iterKeys ( [ -3, 100_000 ], [ 10, 1_000_000 ] ){
        my $count = $iterKeys->[1];

        # the values don't matter here; !!0 is just a cheap false
        my %hash = map { $_ => !!0 } 0 .. $count;

        print "count $count\n";

        Benchmark::cmpthese(
            $iterKeys->[0],
            {
                for_keys => sub {
                    for my $k ( keys %hash ){ }
                    return;
                },
                while_each => sub {
                    while( my( $k, $v ) = each %hash ){ }
                    return;
                },
            },
        );
        print "\n";
    }
    print "\n";
    print scalar gmtime, "\n";
    print "\n";
}
__END__
##########
Fri Nov 11 08:45:57 2011
perl 5.014001
count 100000
             Rate while_each   for_keys
while_each 6.08/s         --       -62%
for_keys   16.2/s       166%         --

count 1000000
           s/iter while_each   for_keys
while_each   1.85         --       -59%
for_keys    0.761       143%         --

Fri Nov 11 08:47:19 2011
I didn't test past 1 million keys (my machine is ancient and I didn't feel like swapping), but you can observe a trend: throwing memory at the problem is faster while it fits in RAM, since for_keys flattens every key into a temporary list up front while each walks the hash one entry at a time, but that will eventually reverse once the key list no longer fits.
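If you want to see where that memory goes, here is a rough sketch using Devel::Size from CPAN (not core, so I'm assuming you have it installed):

#!/usr/bin/perl --
use strict;
use warnings;
use Devel::Size qw( total_size );   # CPAN module, not core

my %hash = map { $_ => !!0 } 0 .. 99_999;

# for/keys materializes every key as a list before the loop starts;
# copying that list into an array shows roughly what it costs
my @all_keys = keys %hash;
printf "hash      %d bytes\n", total_size( \%hash );
printf "key list  %d bytes\n", total_size( \@all_keys );

# each() only advances the hash's built-in iterator, so its memory
# overhead stays constant no matter how many keys there are
while( my( $k, $v ) = each %hash ){ }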
In reply to Re^6: Finding the size of a nested hash in a HoH
by Anonymous Monk
in thread Finding the size of a nested hash in a HoH
by Jeri