

in reply to Re^5: Using 'keys' on a list
in thread Using 'keys' on a list

In terms of the relative speeds, running the code through B::Concise may be helpful.

I can't interpret the full details, but the code to return a list of keys has fewer operations and only one nextstate (so less bookkeeping?). Both have the same number of ops that are optimised away (those preceded by "ex-").

Hopefully someone with better knowledge of the internals and B::Concise can shed some light.

C:\path>perl -v | find "This is"
This is perl 5, version 28, subversion 0 (v5.28.0) built for MSWin32-x64-multi-thread
C:\path>type return_hash_keys.pl
use strict;
use warnings;
sub zort {
    my %hash = (a => 1);
    return keys %hash;
};
my @x = zort();
#my $aa = $x[0];

C:\path>perl -MO=Concise,-src return_hash_keys.pl
a  <@> leave[1 ref] vKP/REFC ->(end)
1     <0> enter ->2
# 7: my @x = zort();
2     <;> nextstate(main 6 return_hash_keys.pl:7) v:*,&,{,x*,x&,x$,$ ->3
9     <2> aassign[t2] vKS/COM_AGG ->a
-        <1> ex-list lK ->7
3           <0> pushmark s ->4
6           <1> entersub lKS/STRICT ->7
-              <1> ex-list lK ->6
4                 <0> pushmark s ->5
-                 <1> ex-rv2cv sK/STRICT,1 ->-
5                    <#> gv[IV \&main::zort] s ->6
-        <1> ex-list lK ->9
7           <0> pushmark s ->8
8           <0> padav[@x:6,7] lRM*/LVINTRO ->9
return_hash_keys.pl syntax OK
C:\path>type return_hash_ref.pl
use strict;
use warnings;
sub zort {
    my %hash = (a => 1);
    return \%hash
};
my $x = zort();
my @x = keys %$x;
#my $aa = $x[0];

C:\path>perl -MO=Concise,-src return_hash_ref.pl
g  <@> leave[1 ref] vKP/REFC ->(end)
1     <0> enter ->2
# 7: my $x = zort();
2     <;> nextstate(main 6 return_hash_ref.pl:7) v:*,&,{,x*,x&,x$,$ ->3
7     <2> sassign vKS/2 ->8
5        <1> entersub sKS/STRICT ->6
-           <1> ex-list sK ->5
3              <0> pushmark s ->4
-              <1> ex-rv2cv sK/STRICT,1 ->-
4                 <#> gv[IV \&main::zort] s ->5
6        <0> padsv[$x:6,8] sRM*/LVINTRO ->7
# 8: my @x = keys %$x;
8     <;> nextstate(main 7 return_hash_ref.pl:8) v:*,&,{,x*,x&,x$,$ ->9
f     <2> aassign[t6] vKS/COM_AGG ->g
-        <1> ex-list lK ->d
9           <0> pushmark s ->a
c           <1> keys[t5] lK/1 ->d
b              <1> rv2hv[t2] lKRM/STRICT ->c
a                 <0> padsv[$x:6,8] sM/DREFHV ->b
-        <1> ex-list lK ->f
d           <0> pushmark s ->e
e           <0> padav[@x:7,8] lRM*/LVINTRO ->f
return_hash_ref.pl syntax OK
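As a side note (these runs are not shown above): B::Concise also has an -exec renderer that lists the ops in execution order rather than as a syntax tree, which can make it easier to count and compare the ops in the two versions:

C:\path>perl -MO=Concise,-exec return_hash_keys.pl
C:\path>perl -MO=Concise,-exec return_hash_ref.pl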

Re^7: Using 'keys' on a list
by LanX (Saint) on Jul 09, 2021 at 16:08 UTC
    Hi

    I didn't read the full thread, so this might not be important.

    But please also take the overhead of allocating memory into account, especially with small arrays and hashes.

    It makes a big difference whether a container has already allocated enough space and doesn't need to be expanded (as with variables from an outer scope), whether it is newly created, or whether a list is pushed onto the stack.

    Not as easy as just parsing the optree. :)
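    A minimal sketch of what pre-extending containers looks like (the sizes here are only illustrative):

    use strict;
    use warnings;

    my %hash;
    keys(%hash) = 1024;   # lvalue keys: pre-allocate hash buckets

    my @array;
    $#array = 999;        # pre-extend the array to 1000 slots

    # A container reused from an outer scope keeps its allocation, so
    # refilling it avoids paying the growth cost on every call.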

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    Wikisyntax for the Monastery

      Yes, good point.

      For context, the code I'm feeding through B::Concise is a very simplified version of the original benchmark code (see 11134740 and 11134741). That code allocates very large hashes.

      The interesting point is that returning a huge list of keys, in list context, is faster than returning a reference to the hash and then calling keys on the dereferenced hash. Ordinarily one would expect the latter to be faster than the former.

        I read your output from B::Concise with interest, though I also don't know how to interpret it. Memory allocation effort is a likely suspect in this mystery, but it is beyond me why the sub can do this more easily than the caller. Maybe generating a large hash also causes a large block of memory to be allocated, which makes the keys operation more efficient? I dunno.

        As an aside, here is how I came to deal with a very large hash: I was using the TableMatrix Tk widget. The 2-D table coordinates are hash keys like "$row,$col", and it is pretty easy to wind up with 80,000 keys representing a 2-D matrix that way. At the time I did a lot of benchmarking because I wanted the GUI performance to be adequate on my target machine, a slow Windows XP laptop. I also benchmarked pre-sizing the hash (like keys(%hash) = 8192; # number of buckets). I found that although this was faster, it made no significant percentage difference in the total application CPU usage, because the work done with the hash once it was created dwarfed the effort to create it in the first place.

        To sort this thing by column, I flattened it out to a 2-D array, sorted via the Schwartzian Transform (ST), then rebuilt the hash key representation. This achieved my GUI performance goals at the time and I didn't pursue it further. Getting just the keys of a large hash has never come up in my programming, although I can see how it could, perhaps for an object method. Most times that I've used a large hash, it has been important to know the values as well as the keys (exceptions would be getting unique values or something like that).
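        A toy sketch of that flatten / ST-sort / rebuild round trip (the data, dimensions and sort column here are made up, not my original code):

        use strict;
        use warnings;

        # %cell maps "$row,$col" keys (as TableMatrix uses) to cell values.
        my %cell = ( '0,0' => 'b', '0,1' => 2, '1,0' => 'a', '1,1' => 1 );
        my ( $n_rows, $n_cols, $sort_col ) = ( 2, 2, 0 );

        # Flatten the hash into a 2-D array: one arrayref per row.
        my @table;
        for my $r ( 0 .. $n_rows - 1 ) {
            $table[$r] = [ map { $cell{"$r,$_"} } 0 .. $n_cols - 1 ];
        }

        # Schwartzian Transform: decorate with the sort column, sort, undecorate.
        my @sorted = map  { $_->[1] }
                     sort { $a->[0] cmp $b->[0] }
                     map  { [ $_->[$sort_col], $_ ] } @table;

        # Rebuild the "$row,$col" key representation from the sorted rows.
        my %sorted_cell;
        for my $r ( 0 .. $#sorted ) {
            $sorted_cell{"$r,$_"} = $sorted[$r][$_] for 0 .. $n_cols - 1;
        }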

        I guess unless some more experienced Monk can explain this, it will remain a weird, quirky mystery. Clearly some optimization is happening, and that optimization may have side effects that could conceivably even be detrimental. There always seems to be a "plus vs minus" with these things.