in reply to Re^5: Finding the size of a nested hash in a HoH
in thread Finding the size of a nested hash in a HoH

Your test methodology seems broken to me.

  1. In the first script you call the subroutines with_keys() and with_each(), but they do not appear anywhere in the script.
  2. In the second script, if the keys test pushes the process into swapping, then the each test that follows will be equally affected.

And the output of GTop seems very muddled to me.

It is easy to verify that keys in a for loop creates a list of the keys (which therefore consumes extra memory) by running this:

perl -E"my %h = 1..100; for my $k(keys %h){ undef %h; say scalar %h; s +ay $k }"

It constructs a hash, enters a for loop using keys, and then undefs the hash and displays its size (zero) on the first (and every) iteration. The for loop still iterates through all the keys even though the hash has been emptied, therefore a separate list of the keys must have been constructed.
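For contrast (a quick companion one-liner along the same lines, not from the original post), the equivalent loop written with each stops after a single iteration once the hash has been emptied, because each walks the hash in place rather than building a list up front:

perl -E"my %h = 1..100; while( my $k = each %h ){ undef %h; say scalar %h; say $k }"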


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^7: Finding the size of a nested hash in a HoH
by remiah (Hermit) on Nov 11, 2011 at 10:27 UTC
    I forgot to paste the functions! And I had only looked at the process's memory size, so I cut the other GTop outputs.
    #!/usr/bin/perl
    use strict;
    use warnings;
    use GTop();
    use Time::HiRes;

    my ($gtop, $max, %h, @t);
    $gtop = new GTop;
    $t[0] = Time::HiRes::time();
    printf "###count=$ARGV[1],$ARGV[0]###\n";
    p("before");

    $max = $ARGV[1];
    %h = map { $_ => "test" } (1 .. $max);
    p("after hash");

    if ($ARGV[0] eq "keys") {
        &with_keys();
    } elsif ($ARGV[0] eq "each") {
        &with_each();
    } else {
        print "else\n";
    }
    p("after loop");

    $t[1] = Time::HiRes::time();
    printf "time=%.3f\n", ($t[1] - $t[0]);

    sub p {
        my ($cap) = @_;
        my $m = $gtop->proc_mem($$);
        #printf "$cap: size=%s,vsize=%s,resident=%s,share=%s,rss=%s\n"
        #    , $m->size, $m->vsize, $m->resident, $m->share, $m->rss;
        printf "%-10s: size=%s\n", $cap, $m->size;
    }

    sub with_keys {
        foreach my $k (keys %h) {
            # no processing
        }
    }

    sub with_each {
        while (my ($k, $v) = each %h) {
            # no processing
        }
    }
    The output looks like this.
    ###count=10000,keys###
    before    : size=8708096
    after hash: size=11853824
    after loop: size=11853824
    time=0.044
    ###count=10000,each###
    before    : size=8708096
    after hash: size=11853824
    after loop: size=11853824
    time=0.050
    ###count=100000,keys###
    before    : size=8708096
    after hash: size=41213952
    after loop: size=41213952
    time=0.682
    ###count=100000,each###
    before    : size=8708096
    after hash: size=41213952
    after loop: size=41213952
    time=0.805
    ###count=1000000,keys###
    before    : size=8708096
    after hash: size=294969344
    after loop: size=296017920
    time=7.568
    ###count=1000000,each###
    before    : size=8708096
    after hash: size=294969344
    after loop: size=296017920
    time=8.563
    ###count=2000000,keys###
    before    : size=8708096
    after hash: size=581230592
    after loop: size=582279168
    time=104.976
    ###count=2000000,each###
    before    : size=8708096
    after hash: size=581230592
    after loop: size=582279168
    time=225.191
    perl -E"my %h = 1..100; for my $k(keys %h){ undef %h; say scalar %h; s +ay $k }"

    I understand what this means now: the count shows 0 but the keys still exist, so a separate list of keys must have been built. But I wonder why GTop reports exactly the same memory size for the foreach and the while loops? I mean the "after loop: size" value in my output.

    As for the second shell script, I will change it to run the "each" test first and the "keys" test second, and I'll add some sleep statements because of the swap problem. It takes some time, so I'll report back later. Thanks a lot for the reply.

      I changed the shell script like this.
      perl 025-1.pl each 10000 > log
      sleep 10; perl 025-1.pl keys 10000 >> log
      sleep 10; perl 025-1.pl each 100000 >> log
      sleep 10; perl 025-1.pl keys 100000 >> log
      sleep 10; perl 025-1.pl each 1000000 >> log
      sleep 10; perl 025-1.pl keys 1000000 >> log
      sleep 10; perl 025-1.pl each 2000000 >> log
      sleep 10; perl 025-1.pl keys 2000000 >> log
      And the result seems to show that "each" is slower than "keys", especially as the hash becomes larger.
      ###count=10000,each###
      before    : size=8708096
      after hash: size=11853824
      after loop: size=11853824
      time=0.051
      ###count=10000,keys###
      before    : size=8708096
      after hash: size=11853824
      after loop: size=11853824
      time=0.043
      ###count=100000,each###
      before    : size=8708096
      after hash: size=41213952
      after loop: size=41213952
      time=0.791
      ###count=100000,keys###
      before    : size=8708096
      after hash: size=41213952
      after loop: size=41213952
      time=0.680
      ###count=1000000,each###
      before    : size=8708096
      after hash: size=294969344
      after loop: size=296017920
      time=8.561
      ###count=1000000,keys###
      before    : size=8708096
      after hash: size=294969344
      after loop: size=296017920
      time=7.429
      ###count=2000000,each###
      before    : size=8708096
      after hash: size=581230592
      after loop: size=582279168
      time=309.887
      ###count=2000000,keys###
      before    : size=8708096
      after hash: size=581230592
      after loop: size=582279168
      time=99.701

      My questions are:
      1. Why does GTop report that the while loop and the foreach loop consume exactly the same memory?
      2. Why is the foreach/keys loop faster in this example? As another person kindly pointed out to me, Benchmark shows the while/each loop is faster.

      Updated: I had misread the output of Benchmark... Benchmark also says the foreach/keys loop is faster. But the question remains: why?

        Your benchmark is still fatally flawed.

        The major cost of using keys is the request for extra memory from the OS in order to build the list of keys. This is a one-off cost that only occurs the first time the process requests that memory from the OS. By running your code in a loop (using Benchmark), you are amortising that cost over many re-runs, thereby reducing its significance. And by running both methods in the same benchmark, you are further amortising the penalty of that memory allocation over the each-method runs, which don't need it, thus further skewing the results.
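        To illustrate the point (a minimal sketch of the amortisation effect, not your script): under Benchmark both subs run many times in the same process, so the memory grabbed by the first keys pass is reused by every later iteration, and the one-off allocation all but disappears from the per-iteration timings:

        use strict;
        use warnings;
        use Benchmark qw( cmpthese );

        my %h = map { $_ => 1 } 1 .. 1_000_000;

        cmpthese( -3, {
            # builds the full key list; the memory is allocated once, then reused
            keys_loop => sub { my $x; $x = $_ for keys %h },
            # walks the hash in place, one pair at a time
            each_loop => sub { my $x; while ( my ($k, $v) = each %h ) { $x = $k } },
        } );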

        This is a Windows-specific benchmark (due to the use of tasklist.exe in memUsed()) but should be readily adaptable to *nix:
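        The benchmark script itself isn't reproduced here, but a minimal sketch of the kind of harness described (the option handling and the tasklist parsing are my own assumptions, not the original code) looks like this:

        #!/usr/bin/perl
        # Sketch only: times one full sweep of a pre-built hash and reports the
        # extra memory the process acquired, read via tasklist.exe on Windows.
        use strict;
        use warnings;
        use Time::HiRes qw( time );

        # Rough per-process memory in MB, parsed from tasklist's "NN,NNN K" column.
        sub memUsed {
            my ($line) = `tasklist /NH /FI "PID eq $$"`;
            my ($kb) = $line =~ /([\d,]+)\s*K\s*$/;
            $kb =~ tr/,//d;
            return $kb / 1024;
        }

        my ($method, $n) = @ARGV;          # e.g.  keys 1e6   or   each 1e6
        my %h = map { $_ => 1 } 1 .. $n;

        my $before = memUsed();
        print "hash built, starting timer\n";
        my $start = time;

        if ($method eq 'keys') {
            my $x;
            $x = $_ for keys %h;           # forces the full key list to be built
        }
        else {
            my $x;
            while ( my ($k, $v) = each %h ) { $x = $k }   # one pair at a time
        }

        printf "Took %f and %.3fMB extra memory for %d using %s method\n",
            time() - $start, memUsed() - $before, $n, $method;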

        On my system, for 1 million keys, for / keys requires 56MB of extra memory and so is 10x slower than while / each:

        c:\test>each-keys-b -M1=keys -M2=pairs -N=1e6
        hash built, starting timer
        Took 11.663000 and 56.145MB extra memory for 1000000 using keys/pairs method

        c:\test>each-keys-b -M1=each -M2=pairs -N=1e6
        hash built, starting timer
        Took 1.296000 and 0.020MB extra memory for 1000000 using each/pairs method

        For 5 million keys, for / keys requires 228MB of extra memory and so is 40x slower than while / each:

        c:\test>each-keys-b -M1=keys -M2=pairs -N=5e6
        hash built, starting timer
        Took 280.350000 and 228.652MB extra memory for 5000000 using keys/pairs method

        c:\test>each-keys-b -M1=each -M2=pairs -N=5e6
        hash built, starting timer
        Took 6.613000 and 0.020MB extra memory for 5000000 using each/pairs method

        For 8 million keys, for / keys requires 340MB of extra memory and so is 80x slower than while / each:

        c:\test>each-keys-b -M1=keys -M2=pairs -N=8e6
        hash built, starting timer
        Took 899.040000 and 343.402MB extra memory for 8000000 using keys/pairs method

        c:\test>each-keys-b -M1=each -M2=pairs -N=8e6
        hash built, starting timer
        Took 11.112000 and 0.023MB extra memory for 8000000 using each/pairs method

        And at these hash sizes, my machine has not yet moved into swapping. When I retire tonight, I'll leave the machine running on 10 million keys, which will cause swapping, and I'll report the timings tomorrow.


        With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.