in reply to Re^2: Finding the size of a nested hash in a HoH
in thread Finding the size of a nested hash in a HoH

If each were removed from the language, would anyone really miss it? I know it'd break some legacy code, although I doubt I've used it more than once every few years. But is there ever a time when each saves more than a line of code over using keys and then getting the value in a second step? (Hmm, I'm off to search and see if there's already been a "what's the least useful core function" thread.)
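
(For reference, a minimal sketch of the two idioms I'm comparing -- each gives key and value in one step, keys needs a second line for the lookup:)

my %hash = ( a => 1, b => 2 );

# with each: key and value come back together
while ( my ( $key, $val ) = each %hash ) {
    print "$key => $val\n";
}

# with keys: one extra line to fetch the value
for my $key ( keys %hash ) {
    my $val = $hash{$key};
    print "$key => $val\n";
}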

Aaron B.
My Woefully Neglected Blog, where I occasionally mention Perl.

Re^4: Finding the size of a nested hash in a HoH
by BrowserUk (Patriarch) on Nov 10, 2011 at 17:16 UTC
    But is there ever a time when each saves more than a line of code over using keys and then getting the value in a second step?

    each really comes into its own when processing really large hashes.

    keys in a list context, such as in a for loop:

    for my $key ( keys %hash ) {
        my $val = $hash{ $key };
        ## ...
    }

    Creates a list of all the keys, which uses a substantial amount of memory and therefore time. Especially if it forces the process into swapping.
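
    (A rough way to see that extra allocation, assuming the Devel::Size module is available -- the named array here just stands in for the list the for loop builds:)

    use Devel::Size qw( total_size );

    my %hash = map { $_ => 1 } 1 .. 1_000_000;
    my @keys = keys %hash;    # roughly what the for loop materialises

    printf "hash:     %d bytes\n", total_size( \%hash );
    printf "key list: %d bytes\n", total_size( \@keys );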

    Conversely, each in a while loop:

    while( my( $key, $val ) = each %hash ) {
        ## ...
    }

    Uses negligible extra memory and iterates far more quickly because of it.

    When operating with large hashes at the limits of memory, it can be the difference between seconds and hours of processing time.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      It may not be good to have a large hash that takes a lot of memory. I have sometimes run into such a case with my own poor programs, so I am interested in this topic.

      In my results, "keys" seems faster and smaller than "each" on FreeBSD 8.2 with perl 5.12.2. This is the first time I have used GTop, so there may be something wrong with my test script. If so, please let me know.

      This is the Perl script, using GTop and Time::HiRes:
      #!/usr/bin/perl
      use strict;
      use warnings;
      use GTop();
      use Time::HiRes;

      my($gtop,$max,%h,@t);
      $gtop=new GTop;
      $t[0]=Time::HiRes::time();
      printf "###count=$ARGV[1],$ARGV[0]###\n";
      p("before");
      $max=$ARGV[1];
      %h=map { $_ => "test" } (1 .. $max);
      p("after hash");
      if ($ARGV[0] eq "keys"){
          &with_keys();
      } elsif($ARGV[0] eq "each") {
          &with_each();
      } else {
          print "else\n";
      }
      p("after loop");
      $t[1]=Time::HiRes::time();
      printf "time=%.3f\n", ($t[1]-$t[0]);
      exit;
      And the shell script that drives it:
      perl tmp.pl keys 10000   >  log
      perl tmp.pl each 10000   >> log
      perl tmp.pl keys 100000  >> log
      perl tmp.pl each 100000  >> log
      perl tmp.pl keys 1000000 >> log
      perl tmp.pl each 1000000 >> log
      perl tmp.pl keys 2000000 >> log
      perl tmp.pl each 2000000 >> log
      The resulting log is below:
      ###count=10000,keys###
      before:     size=10035200,vsize=10035200,resident=5500928,share=5143114,rss=5500928
      after hash: size=13180928,vsize=13180928,resident=8306688,share=5143114,rss=8306688
      after loop: size=13180928,vsize=13180928,resident=8376320,share=5143114,rss=8376320
      time=0.043
      ###count=10000,each###
      before:     size=10035200,vsize=10035200,resident=5541888,share=5143114,rss=5541888
      after hash: size=13180928,vsize=13180928,resident=8347648,share=5143114,rss=8347648
      after loop: size=13180928,vsize=13180928,resident=8417280,share=5143114,rss=8417280
      time=0.050
      ###count=100000,keys###
      before:     size=10035200,vsize=10035200,resident=5541888,share=5143114,rss=5541888
      after hash: size=43589632,vsize=43589632,resident=39514112,share=5143114,rss=39514112
      after loop: size=43589632,vsize=43589632,resident=39514112,share=5143114,rss=39514112
      time=0.689
      ###count=100000,each###
      before:     size=10035200,vsize=10035200,resident=5541888,share=5143114,rss=5541888
      after hash: size=43589632,vsize=43589632,resident=39514112,share=5143114,rss=39514112
      after loop: size=43589632,vsize=43589632,resident=39514112,share=5143114,rss=39514112
      time=0.799
      ###count=1000000,keys###
      before:     size=10035200,vsize=10035200,resident=5545984,share=5143114,rss=5545984
      after hash: size=296296448,vsize=296296448,resident=282710016,share=5143114,rss=282710016
      after loop: size=297345024,vsize=297345024,resident=282718208,share=5143114,rss=282718208
      time=7.389
      ###count=1000000,each###
      before:     size=10035200,vsize=10035200,resident=5545984,share=5143114,rss=5545984
      after hash: size=296296448,vsize=296296448,resident=282710016,share=5143114,rss=282710016
      after loop: size=297345024,vsize=297345024,resident=282718208,share=5143114,rss=282718208
      time=8.522
      ###count=2000000,keys###
      before:     size=10035200,vsize=10035200,resident=5545984,share=5143114,rss=5545984
      after hash: size=582557696,vsize=582557696,resident=354185216,share=5143114,rss=354185216
      after loop: size=583606272,vsize=583606272,resident=360177664,share=5143114,rss=360177664
      time=103.454
      ###count=2000000,each###
      before:     size=10035200,vsize=10035200,resident=5484544,share=5143114,rss=5484544
      after hash: size=582557696,vsize=582557696,resident=359972864,share=5143114,rss=359972864
      after loop: size=583606272,vsize=583606272,resident=352419840,share=5143114,rss=352419840
      time=268.264

      Once the key count exceeds a million, the iteration seems to need some extra memory, and memory consumption is the same for "keys" and "each". "keys" becomes much faster in the 2 million case. But Benchmark shows a different result.

      #!/usr/bin/perl
      use strict;
      use warnings;
      use Data::Dumper;
      use Benchmark qw/cmpthese timethese/;

      my($max,%h);
      $max=1000000;
      %h=map { $_ => "test" } (1 .. $max);

      sub with_keys{
          foreach my $k (keys %h){
              #no proc
          }
      }
      sub with_each{
          while( my($k,$v)=each %h){
              #no proc
          }
      }

      cmpthese(
          timethese( 100, {
              'with keys'=> &with_keys,
              'with each'=> &with_each,
          } )
      );
      The output shows these are the same.
      Benchmark: timing 100 iterations of with each, with keys...
       with each:  0 wallclock secs ( 0.00 usr +  0.00 sys =  0.00 CPU)
                  (warning: too few iterations for a reliable count)
       with keys:  0 wallclock secs ( 0.00 usr +  0.00 sys =  0.00 CPU)
                  (warning: too few iterations for a reliable count)
                                Rate with each with keys
      with each 100000000000000000/s        --        0%
      with keys 100000000000000000/s        0%        --
      I left my PC alone while running my test scripts... I believe. Would someone give me some insight into these results? I wonder why.

        Your test methodology seems broken to me.

        1. In the first script you are calling subroutines with_keys() and with_each(), but they do not appear in the script?
        2. In the second script, if the keys test moves the process into swapping, then the each test will be equally affected.

        And the output of GTop seems very muddled to me.

        It is easy to verify that keys in a for loop creates a list of the keys (which therefore consumes extra memory) by running this:

        perl -E"my %h = 1..100; for my $k(keys %h){ undef %h; say scalar %h; s +ay $k }"

        It constructs a hash, enters a for loop using keys, and then undefs the hash and displays its size (zero) on the first (and every) iteration. The for loop iterates through all the keys even though the hash has been emptied. Therefore a separate list of the keys must have been constructed.


        With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.

        The output shows these are the same.

        This warning is important: "warning: too few iterations for a reliable count"

        You never actually benchmark with_keys or with_each; you benchmark their return values instead.
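
        (A minimal sketch of the needed change -- pass code references rather than calling the subs; the empty subs here just stand in for the ones in the post above:)

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Benchmark qw/cmpthese/;

        sub with_keys { }
        sub with_each { }

        # cmpthese() wants code references; &with_keys would call the sub
        # immediately and hand cmpthese() its return value instead
        cmpthese( -1, {
            'with keys' => \&with_keys,
            'with each' => \&with_each,
        } );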

        Consider

        #!/usr/bin/perl --
        use strict;
        use warnings;
        {
            use Benchmark;
            print "##########\n";
            print scalar gmtime, "\n";
            print "perl $] \n";
            for my $iterKeys ( [ -3, 100_000], [ 10, 1_000_000 ] ){
                my $count = $iterKeys->[1];
                my %hash = map { $_ => !!0 } 0 .. $count;
                print "count $count\n";
                Benchmark::cmpthese(
                    $iterKeys->[0],
                    {
                        for_keys => sub {
                            for my $k( keys %hash ){ }
                            return;
                        },
                        while_each => sub {
                            while( my($k,$v) = each %hash ){ }
                            return;
                        },
                    },
                );
                print "\n";
            }
            print "\n";
            print scalar gmtime, "\n";
            print "\n";
        }
        __END__
        ##########
        Fri Nov 11 08:45:57 2011
        perl 5.014001
        count 100000
                     Rate while_each   for_keys
        while_each 6.08/s         --       -62%
        for_keys   16.2/s       166%         --

        count 1000000
                   s/iter while_each for_keys
        while_each   1.85         --     -59%
        for_keys    0.761       143%       --

        Fri Nov 11 08:47:19 2011

        I didn't test past 1 million keys (my machine is ancient, and I didn't feel like swapping), but you can observe a trend -- throwing memory at the problem (for_keys) is faster while it fits in RAM, but eventually that will reverse.

      I remember reading that in the Camel book, but I didn't know if it still worked that way. I'll keep each in mind in case I'm ever dealing with a hash that pushes my memory limits, although that seems like a rare situation -- a hash that fits into memory, but where adding an array of its keys would cross the line.

      Aaron B.
      My Woefully Neglected Blog, where I occasionally mention Perl.