http://qs1969.pair.com?node_id=11134811


in reply to Re^4: Using 'keys' on a list
in thread Using 'keys' on a list

I just tested my benchmark code to see if it is doing what I think it is doing.

My eyes can certainly be deceived, but why does generating the list of keys in the sub and passing a copy of that list back to the caller appear to be faster than passing a ref to the caller and having it generate the list itself?
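
For what it's worth, here is a minimal sketch (not my original benchmark code, just an assumed reduction of it, with made-up names and a made-up hash size) of how one can check that both variants really return the same keys before timing them:

use strict;
use warnings;
use Test::More tests => 1;

# hypothetical stand-ins for the two benchmarked variants
my %hash = map { $_ => 1 } 1 .. 10_000;
sub keys_by_list { return keys %hash }   # sub builds and returns the key list
sub keys_by_ref  { return \%hash }       # caller derefs and builds the list itself

my @from_list = keys_by_list();
my @from_ref  = keys %{ keys_by_ref() };

is_deeply( [ sort @from_list ], [ sort @from_ref ],
    'both variants return the same keys' );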

Re^6: Using 'keys' on a list
by swl (Parson) on Jul 08, 2021 at 23:17 UTC

    In terms of the relative speeds, running the code through B::Concise may be helpful.

    I can't interpret the full details, but the code to return a list of keys has fewer operations and only one nextstate (so less bookkeeping?). Both have the same number of ops that are optimised away (those preceded by "ex-").

    Hopefully someone with better knowledge of the internals and B::Concise can shed some light.

    C:\path>perl -v | find "This is"
    This is perl 5, version 28, subversion 0 (v5.28.0) built for MSWin32-x64-multi-thread

    C:\path>type return_hash_keys.pl
    use strict;
    use warnings;
    sub zort {
        my %hash = (a => 1);
        return keys %hash;
    };
    my @x = zort();
    #my $aa = $x[0];

    C:\path>perl -MO=Concise,-src return_hash_keys.pl
    a  <@> leave[1 ref] vKP/REFC ->(end)
    1     <0> enter ->2
    # 7: my @x = zort();
    2     <;> nextstate(main 6 return_hash_keys.pl:7) v:*,&,{,x*,x&,x$,$ ->3
    9     <2> aassign[t2] vKS/COM_AGG ->a
    -        <1> ex-list lK ->7
    3           <0> pushmark s ->4
    6           <1> entersub lKS/STRICT ->7
    -              <1> ex-list lK ->6
    4                 <0> pushmark s ->5
    -                 <1> ex-rv2cv sK/STRICT,1 ->-
    5                    <#> gv[IV \&main::zort] s ->6
    -        <1> ex-list lK ->9
    7           <0> pushmark s ->8
    8           <0> padav[@x:6,7] lRM*/LVINTRO ->9
    return_hash_keys.pl syntax OK

    C:\path>type return_hash_ref.pl
    use strict;
    use warnings;
    sub zort {
        my %hash = (a => 1);
        return \%hash
    };
    my $x = zort();
    my @x = keys %$x;
    #my $aa = $x[0];

    C:\path>perl -MO=Concise,-src return_hash_ref.pl
    g  <@> leave[1 ref] vKP/REFC ->(end)
    1     <0> enter ->2
    # 7: my $x = zort();
    2     <;> nextstate(main 6 return_hash_ref.pl:7) v:*,&,{,x*,x&,x$,$ ->3
    7     <2> sassign vKS/2 ->8
    5        <1> entersub sKS/STRICT ->6
    -           <1> ex-list sK ->5
    3              <0> pushmark s ->4
    -              <1> ex-rv2cv sK/STRICT,1 ->-
    4                 <#> gv[IV \&main::zort] s ->5
    6        <0> padsv[$x:6,8] sRM*/LVINTRO ->7
    # 8: my @x = keys %$x;
    8     <;> nextstate(main 7 return_hash_ref.pl:8) v:*,&,{,x*,x&,x$,$ ->9
    f     <2> aassign[t6] vKS/COM_AGG ->g
    -        <1> ex-list lK ->d
    9           <0> pushmark s ->a
    c           <1> keys[t5] lK/1 ->d
    b              <1> rv2hv[t2] lKRM/STRICT ->c
    a                 <0> padsv[$x:6,8] sM/DREFHV ->b
    -        <1> ex-list lK ->f
    d           <0> pushmark s ->e
    e           <0> padav[@x:7,8] lRM*/LVINTRO ->f
    return_hash_ref.pl syntax OK
      Hi

      I didn't read the full thread, so this might not be important.

      But please also take the overhead of memory allocation into account, especially with small arrays and hashes.

      It makes a big difference whether a container has already allocated enough space and doesn't need to be expanded (as with variables from an outer scope), or whether it is newly created, or whether a list is pushed onto the stack.

      It's not as easy as just parsing the optree. :)
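
      As a sketch of that point (the sizes are made up, not taken from the benchmark), Perl lets you pre-extend containers, which makes it possible to separate allocation cost from the cost of the operation being measured:

      use strict;
      use warnings;

      # Pre-size a hash: using keys as an lvalue allocates at least that many
      # buckets up front, so the inserts below don't trigger bucket splits.
      my %hash;
      keys(%hash) = 100_000;
      $hash{$_} = 1 for 1 .. 100_000;

      # Pre-extend an array: assigning to $#array grows it in one step,
      # so the assignments below don't trigger repeated reallocations.
      my @array;
      $#array = 99_999;
      $array[$_] = $_ for 0 .. 99_999;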

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      Wikisyntax for the Monastery

        Yes, good point.

        For context, the code I'm feeding through B::Concise is a very simplified version of the original benchmark code (see 11134740 and 11134741). That code allocates very large hashes.

        The interesting point is that returning a huge list of keys in list context is faster than returning a reference to the hash and then calling keys on the dereferenced hash. Ordinarily one would expect the latter to be faster than the former.
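
        A minimal sketch of that comparison (not the code from 11134740/11134741, just an assumed reduction with a single large hash and made-up sub names) would look something like:

        use strict;
        use warnings;
        use Benchmark qw(cmpthese);

        # one large hash, roughly in the spirit of the original benchmark
        my %hash = map { $_ => 1 } 1 .. 100_000;

        sub return_keys { return keys %hash }   # returns a copy of the key list
        sub return_ref  { return \%hash }       # caller derefs and calls keys

        cmpthese( -3, {
            copy_of_keys  => sub { my @k = return_keys() },
            ref_then_keys => sub { my @k = keys %{ return_ref() } },
        } );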