in reply to Re^6: Finding the size of a nested hash in a HoH
in thread Finding the size of a nested hash in a HoH

I forgot to paste the functions! I was watching only the process's memory size, so I cut the other outputs of GTop.
#!/usr/bin/perl
use strict;
use warnings;
use GTop();
use Time::HiRes;

my($gtop,$max,%h,@t);
$gtop=new GTop;
$t[0]=Time::HiRes::time();
printf "###count=$ARGV[1],$ARGV[0]###\n";
p("before");

$max=$ARGV[1];
%h=map { $_ => "test" } (1 .. $max);
p("after hash");

if ($ARGV[0] eq "keys"){
    &with_keys();
} elsif($ARGV[0] eq "each") {
    &with_each();
} else {
    print "else\n";
}
p("after loop");

$t[1]=Time::HiRes::time();
printf "time=%.3f\n", ($t[1]-$t[0]);

sub p {
    my ($cap)=@_;
    my $m=$gtop->proc_mem($$);
    #printf "$cap: size=%s,vsize=%s,resident=%s,share=%s,rss=%s\n"
    #    ,$m->size,$m->vsize,$m->resident,$m->share,$m->rss;
    printf "%-10s: size=%s\n",$cap,$m->size;
}

sub with_keys{
    foreach my $k (keys %h){
        #no proc
    }
}

sub with_each{
    while( my($k,$v)=each %h){
        #no proc
    }
}
The output looks like this.
###count=10000,keys###
before    : size=8708096
after hash: size=11853824
after loop: size=11853824
time=0.044
###count=10000,each###
before    : size=8708096
after hash: size=11853824
after loop: size=11853824
time=0.050
###count=100000,keys###
before    : size=8708096
after hash: size=41213952
after loop: size=41213952
time=0.682
###count=100000,each###
before    : size=8708096
after hash: size=41213952
after loop: size=41213952
time=0.805
###count=1000000,keys###
before    : size=8708096
after hash: size=294969344
after loop: size=296017920
time=7.568
###count=1000000,each###
before    : size=8708096
after hash: size=294969344
after loop: size=296017920
time=8.563
###count=2000000,keys###
before    : size=8708096
after hash: size=581230592
after loop: size=582279168
time=104.976
###count=2000000,each###
before    : size=8708096
after hash: size=581230592
after loop: size=582279168
time=225.191
perl -E"my %h = 1..100; for my $k(keys %h){ undef %h; say scalar %h; s +ay $k }"

Now I understand what this means. The count shows 0, but the keys still exist, which means keys builds a separate list. What I wonder is why GTop reports exactly the same memory size for the foreach and the while versions; I mean the "after loop: size" values in my output.
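If I want to catch that separate list while it is still alive, perhaps I should call p() inside the loops as well, not only after them. A rough sketch (the first-pass probe is my own addition; p(), $gtop and %h are the ones from the script above):
sub with_keys_probed {
    my $first = 1;
    foreach my $k (keys %h) {          # keys() builds its complete list here
        if ($first) { p("inside keys"); $first = 0; }
    }
}

sub with_each_probed {
    my $first = 1;
    while (my ($k, $v) = each %h) {    # each() only advances the internal iterator
        if ($first) { p("inside each"); $first = 0; }
    }
}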

As for the second shell script, I will change it to run the "each" test first and the "keys" test next, and I'll add some sleep statements to avoid the swap problem. It takes some time, so I'll report the results later. Thanks a lot for the reply.

Re^8: Finding the size of a nested hash in a HoH
by remiah (Hermit) on Nov 11, 2011 at 11:53 UTC
    I changed the shell script like this.
    perl 025-1.pl each 10000 > log
    sleep 10; perl 025-1.pl keys 10000 >> log
    sleep 10; perl 025-1.pl each 100000 >> log
    sleep 10; perl 025-1.pl keys 100000 >> log
    sleep 10; perl 025-1.pl each 1000000 >> log
    sleep 10; perl 025-1.pl keys 1000000 >> log
    sleep 10; perl 025-1.pl each 2000000 >> log
    sleep 10; perl 025-1.pl keys 2000000 >> log
    And the result seems to show that "each" is slower than "keys", especially as the hash becomes larger.
    ###count=10000,each###
    before    : size=8708096
    after hash: size=11853824
    after loop: size=11853824
    time=0.051
    ###count=10000,keys###
    before    : size=8708096
    after hash: size=11853824
    after loop: size=11853824
    time=0.043
    ###count=100000,each###
    before    : size=8708096
    after hash: size=41213952
    after loop: size=41213952
    time=0.791
    ###count=100000,keys###
    before    : size=8708096
    after hash: size=41213952
    after loop: size=41213952
    time=0.680
    ###count=1000000,each###
    before    : size=8708096
    after hash: size=294969344
    after loop: size=296017920
    time=8.561
    ###count=1000000,keys###
    before    : size=8708096
    after hash: size=294969344
    after loop: size=296017920
    time=7.429
    ###count=2000000,each###
    before    : size=8708096
    after hash: size=581230592
    after loop: size=582279168
    time=309.887
    ###count=2000000,keys###
    before    : size=8708096
    after hash: size=581230592
    after loop: size=582279168
    time=99.701

    My questions are:
    1. Why does GTop say the while loop and the foreach loop consume exactly the same memory?
    2. Why is the foreach/keys loop faster in this example? As another person kindly pointed out to me, Benchmark shows the while/each loop is faster.

    updated: I had misunderstood the output of Benchmark... Benchmark also says the foreach/keys loop is faster. But the question remains: why?

      Your benchmark is still fatally flawed.

      The major cost of using keys is the request for extra memory from the OS in order to build the list of keys. This is a one-off cost that only occurs the first time the process requests that memory from the OS. By running your code in a loop (using Benchmark), you are amortising that cost over many re-runs, thereby reducing its significance. And by running both methods in the same benchmark, you are further amortising the penalty of that memory allocation over the each-method runs, which don't need it, skewing the results even further.

      This is a Windows-specific benchmark (due to the use of tasklist.exe in memUsed()) but should be readily adaptable to *nix:

      #! perl -slw
      use strict;
      use Time::HiRes qw[ time ];

      sub memUsed {
          my( $mem ) = `tasklist /nh /fi "PID eq $$"` =~ m[([0-9,]+) K];
          $mem =~ tr[,][]d;
          return $mem / 1024;
      }

      our $M1 //= 'each';
      our $M2 //= 'pairs';
      our $N  //= 1e6;

      my %hash;
      $hash{ sprintf "KEY%010d", $_ } = $_ for 1 .. $N;

      my $startMem = memUsed;
      print "hash built, starting timer";
      my $start = time;
      my $count = 0;

      if( $M1 eq 'each' ) {
          if( $M2 eq 'pairs' ) {
              while( my( $k, $v ) = each %hash ) { ++$count; }
          }
          else {
              while( my $k = each %hash ) { ++$count; }
          }
      }
      else {
          if( $M2 eq 'pairs' ) {
              for my $k ( keys %hash ) { my $v = $hash{ $k }; ++$count; }
          }
          else {
              for my $k ( keys %hash ) { ++$count; }
          }
      }

      my $endMem = memUsed;
      printf "Took %.6f and %.3fMB extra memory for $count using $M1/$M2 method\n",
          time() - $start, $endMem - $startMem;

      On my system, for 1 million keys, for / keys required 56MB of extra memory and so is 10x slower than while / each:

      c:\test>each-keys-b -M1=keys -M2=pairs -N=1e6
      hash built, starting timer
      Took 11.663000 and 56.145MB extra memory for 1000000 using keys/pairs method

      c:\test>each-keys-b -M1=each -M2=pairs -N=1e6
      hash built, starting timer
      Took 1.296000 and 0.020MB extra memory for 1000000 using each/pairs method

      For 5 million keys, for / keys required 228MB of extra memory and so is 40x slower than while / each:

      c:\test>each-keys-b -M1=keys -M2=pairs -N=5e6
      hash built, starting timer
      Took 280.350000 and 228.652MB extra memory for 5000000 using keys/pairs method

      c:\test>each-keys-b -M1=each -M2=pairs -N=5e6
      hash built, starting timer
      Took 6.613000 and 0.020MB extra memory for 5000000 using each/pairs method

      For 8 million keys, for / keys required 340MB of extra memory and so is 80x slower than while / each:

      c:\test>each-keys-b -M1=keys -M2=pairs -N=8e6
      hash built, starting timer
      Took 899.040000 and 343.402MB extra memory for 8000000 using keys/pairs method

      c:\test>each-keys-b -M1=each -M2=pairs -N=8e6
      hash built, starting timer
      Took 11.112000 and 0.023MB extra memory for 8000000 using each/pairs method

      And at these hash sizes, my machine has not yet moved into swapping. When I retire tonight, I'll leave it running on 10 million keys, which will cause swapping, and I'll report the timings tomorrow.


        Replacing tasklist.exe with ps, your code worked on my machine, and it shows that while/each is faster and uses less memory.
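        The ps-based memUsed() could look something like this (a sketch; the exact ps options here are an assumption, not necessarily what I ran):
        sub memUsed {
            # Ask ps for the resident set size (in KB) of this process
            # and convert it to MB, instead of parsing tasklist output.
            my ($kb) = `ps -o rss= -p $$` =~ /(\d+)/;
            return $kb / 1024;
        }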
        1. About Speed
        I noticed from your code that I was being very unfair to while/each, because my foreach/keys test function lacked the value assignment. When I change the Benchmark test function from
        foreach my $k (keys %h){ }
        to
        foreach my $k (keys %h){ my $v = $h{$k}; }
        Benchmark shows that while/each becomes faster.
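        For reference, the corrected comparison could be written like this (a sketch; the hash size and the labels are mine):
        use Benchmark qw(cmpthese);

        my %h = map { $_ => "test" } 1 .. 100_000;

        # Both subs now fetch the value, so neither style gets a free ride.
        cmpthese( -3, {
            'while/each'   => sub { while ( my ($k, $v) = each %h ) { } },
            'foreach/keys' => sub { foreach my $k (keys %h) { my $v = $h{$k}; } },
        });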
        2. About memory
        With the ps command, the while/each loop shows very small memory usage, while the memory allocation of "keys" is significant.
        $./025-6.pl -M1=keys -M2=pairs -N=1e6
        hash built, starting timer
        Took 3.931463 and 39.000MB extra memory for 1000000 using keys/pairs method

        $./025-6.pl -M1=each -M2=pairs -N=1e6
        hash built, starting timer
        Took 2.434757 and 1.000MB extra memory for 1000000 using each/pairs method
        It seems I am missing something with GTop.