The Cost of Nested Referencing

by shotgunefx (Parson)
on Nov 15, 2002 at 12:13 UTC ( [id://213125] )

I often use very deeply nested structures in my code. This can make for ugly code: $self->{thiskey}{thatkey}{etc} = "etc..."
I've been going back and cleaning up some older code by taking a ref to the HoHoHoHo..s, and it got me thinking about hashes in general (that, and reading perlguts).

I always assumed that when you said $hash{'constant_key'}->{key2}, Perl would optimize this at compile time and not keep re-fetching $hash{'constant_key'} after it first encountered it.

I thought wrong.
It would appear that using a reference to the desired "hash in a hash" has a fairly large performance implication. This may well be a "Duh!" for others, but for me it was eye-opening. Using a reference to a HoH speeds things up by about 15% on average! A more practical test (HoH) bears this out. Not a bad speedup. On a HoHoH it averages about 35% faster. The deeper the nesting, the bigger the payoff.
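Roughly, the transformation being measured looks like this (a minimal sketch with made-up variable names, not code from the benchmark below):

    # Full path spelled out: every access walks the outer hash again.
    $stats{web01}{cpu} += $cpu_sample;
    $stats{web01}{mem} += $mem_sample;

    # Hoisted reference: the outer lookup happens only once.
    my $web01 = $stats{web01};
    $web01->{cpu} += $cpu_sample;
    $web01->{mem} += $mem_sample;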

I don't do a lot of benchmarking so if someone sees a flaw in my method, I'd appreciate feedback.

Lesson: not only does using refs to nested data structures make your code easier to read, it also makes it a good bit faster.
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese timethese);

my $char = 'a';
my %hash  = map { $char++ . $_ => $_      } (1..50);
my %HoH   = map { $char++ . $_ => {%hash} } (1..20);
my %HoHoH = map { $char++ . $_ => {%HoH}  } (1..50);

my ($ref, $ref1, $ref2, $ref3, $ref4);
$ref1->{key1}                   = {%hash};
$ref2->{key1}{key2}             = {%hash};
$ref3->{key1}{key2}{key3}       = {%hash};
$ref4->{key1}{key2}{key3}{key4} = {%hash};

my ($shortref, $shortref1, $shortref2, $shortref3, $shortref4) = (
    \%hash,
    $ref1->{key1},
    $ref2->{key1}{key2},
    $ref3->{key1}{key2}{key3},
    $ref4->{key1}{key2}{key3}{key4},
);

my $iterations = 10000;
my @benchmarks = (
    {   count => $iterations,
        hash  => sub { foreach my $k ( keys %hash )        { $hash{$k}++ } },
        ref   => sub { foreach my $k ( keys %{$shortref} ) { $shortref->{$k}++ } },
    },
    {   count     => $iterations,
        ref2hash1 => sub { foreach my $k ( keys %{ $ref1->{key1} } ) { $ref1->{key1}{$k}++ } },
        shortref1 => sub { foreach my $k ( keys %{$shortref1} )      { $shortref1->{$k}++ } },
    },
    {   count     => $iterations,
        ref2hash2 => sub { foreach my $k ( keys %{ $ref2->{key1}{key2} } ) { $ref2->{key1}{key2}{$k}++ } },
        shortref2 => sub { foreach my $k ( keys %{$shortref2} )            { $shortref2->{$k}++ } },
    },
    {   count     => $iterations,
        ref2hash3 => sub { foreach my $k ( keys %{ $ref3->{key1}{key2}{key3} } ) { $ref3->{key1}{key2}{key3}{$k}++ } },
        shortref3 => sub { foreach my $k ( keys %{$shortref3} )                  { $shortref3->{$k}++ } },
    },
    {   count     => $iterations,
        ref2hash4 => sub { foreach my $k ( keys %{ $ref4->{key1}{key2}{key3}{key4} } ) { $ref4->{key1}{key2}{key3}{key4}{$k}++ } },
        shortref4 => sub { foreach my $k ( keys %{$shortref4} )                        { $shortref4->{$k}++ } },
    },
    {   count => 2000,
        HoH => sub {
            foreach my $k (keys %HoH) {
                foreach (keys %{ $HoH{$k} }) {
                    $HoH{$k}->{$_}++;
                }
            }
        },
        refHoH => sub {
            foreach my $k (keys %HoH) {
                my $hashref = $HoH{$k};
                foreach (keys %{$hashref}) {
                    $hashref->{$_}++;
                }
            }
        },
    },
    {   count => 100,
        HoHoH => sub {
            # Traverse
            foreach my $k (keys %HoHoH) {
                foreach my $k2 (keys %{ $HoHoH{$k} }) {
                    foreach (keys %{ $HoHoH{$k}->{$k2} }) {
                        $HoHoH{$k}{$k2}->{$_}++;
                    }
                }
            }
        },
        refHoHoH => sub {
            # Traverse
            foreach my $k (keys %HoHoH) {
                my $hashref = $HoHoH{$k};
                foreach (keys %{$hashref}) {
                    my $hashref2 = $hashref->{$_};
                    foreach (keys %{$hashref2}) {
                        $hashref2->{$_}++;
                    }
                }
            }
        },
    },
);

foreach my $bench (@benchmarks) {
    my $results = timethese( delete $bench->{count}, $bench );
    print "-" x 20, "\n";
    cmpthese($results);
    print "-" x 40, "\n\n";
}

__END__
Benchmark: timing 10000 iterations of hash, ref...
      hash:  2 wallclock secs ( 2.06 usr +  0.00 sys =  2.06 CPU) @ 4854.37/s (n=10000)
       ref:  2 wallclock secs ( 2.18 usr +  0.00 sys =  2.18 CPU) @ 4587.16/s (n=10000)
--------------------
       Rate  ref hash
ref  4587/s   --  -6%
hash 4854/s   6%   --
----------------------------------------

Benchmark: timing 10000 iterations of ref2hash1, shortref1...
 ref2hash1:  3 wallclock secs ( 2.47 usr +  0.00 sys =  2.47 CPU) @ 4048.58/s (n=10000)
 shortref1:  3 wallclock secs ( 2.20 usr +  0.00 sys =  2.20 CPU) @ 4545.45/s (n=10000)
--------------------
             Rate ref2hash1 shortref1
ref2hash1  4049/s        --      -11%
shortref1  4545/s       12%        --
----------------------------------------

Benchmark: timing 10000 iterations of ref2hash2, shortref2...
 ref2hash2:  3 wallclock secs ( 2.90 usr +  0.00 sys =  2.90 CPU) @ 3448.28/s (n=10000)
 shortref2:  2 wallclock secs ( 2.14 usr +  0.00 sys =  2.14 CPU) @ 4672.90/s (n=10000)
--------------------
             Rate ref2hash2 shortref2
ref2hash2  3448/s        --      -26%
shortref2  4673/s       36%        --
----------------------------------------

Benchmark: timing 10000 iterations of ref2hash3, shortref3...
 ref2hash3:  3 wallclock secs ( 3.35 usr +  0.00 sys =  3.35 CPU) @ 2985.07/s (n=10000)
 shortref3:  3 wallclock secs ( 2.23 usr +  0.00 sys =  2.23 CPU) @ 4484.30/s (n=10000)
--------------------
             Rate ref2hash3 shortref3
ref2hash3  2985/s        --      -33%
shortref3  4484/s       50%        --
----------------------------------------

Benchmark: timing 10000 iterations of ref2hash4, shortref4...
 ref2hash4:  4 wallclock secs ( 3.58 usr +  0.00 sys =  3.58 CPU) @ 2793.30/s (n=10000)
 shortref4:  2 wallclock secs ( 2.16 usr +  0.00 sys =  2.16 CPU) @ 4629.63/s (n=10000)
--------------------
             Rate ref2hash4 shortref4
ref2hash4  2793/s        --      -40%
shortref4  4630/s       66%        --
----------------------------------------

Benchmark: timing 2000 iterations of HoH, refHoH...
       HoH: 16 wallclock secs (10.01 usr +  0.00 sys = 10.01 CPU) @ 199.80/s (n=2000)
    refHoH:  9 wallclock secs ( 8.98 usr +  0.00 sys =  8.98 CPU) @ 222.72/s (n=2000)
--------------------
          Rate    HoH refHoH
HoH      200/s     --   -10%
refHoH   223/s    11%     --
----------------------------------------

Benchmark: timing 100 iterations of HoHoH, refHoHoH...
     HoHoH: 30 wallclock secs (28.75 usr +  0.01 sys = 28.76 CPU) @ 3.48/s (n=100)
  refHoHoH: 24 wallclock secs (22.49 usr +  0.00 sys = 22.49 CPU) @ 4.45/s (n=100)
--------------------
            Rate  HoHoH refHoHoH
HoHoH     3.48/s     --     -22%
refHoHoH  4.45/s    28%       --
----------------------------------------


-Lee

"To be civilized is to deny one's nature."

Re: The Cost of Nested Referencing
by Abigail-II (Bishop) on Nov 15, 2002 at 13:46 UTC
    I don't agree that code with multiple indices is necessarily hard to read, or that using an intermediate reference is clearer. Something like:
    $data {$host} {$user} {$process} += $time_used;
    is in my opinion far more clear than:
    $host_ref           = $data {$host};
    $user_host_ref      = $host_ref -> {$user};
    $proc_user_host_ref = $user_host_ref -> {$process};
    $proc_user_host_ref += $time_used;
    The former clearly shows you are collecting data, per host, per user and per process. The latter is just a mess, and you quickly run out of sensible variable names.

    If you have cases where $var {key1} {key2} {key3} becomes unclear, you either have to redesign your data structure, or need to find better key or variable names.

    That using a reference to an inner structure is a win in your benchmark is clear, as you don't have to redo some calculations. But you cannot always do that - you can only do it if you access the same keys repeatedly. Often the keys used are variable, and will differ from iteration to iteration.
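    For instance (with hypothetical record fields), every iteration here needs a different inner hash, so there is no single reference worth hoisting out of the loop:

    foreach my $rec (@records) {
        $data{ $rec->{host} }{ $rec->{user} }{ $rec->{process} } += $rec->{time};
    }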

    Abigail

      I don't think they are necessarily hard to read at all.

      but...
      foreach my $k ( sort { $self->{state}{inputs}{expected}->{$a}
                             <=>
                             $self->{state}{inputs}{expected}->{$b} }
                      keys %{ $self->{state}{inputs}{expected} } ) {
          # do stuff
      }
      can be a little hard on the eyes. I'm certainly not advocating that everyone use refs when working with HoHs. The main point of the post was that going deep gets slow quickly, and I was surprised that in certain cases it isn't optimized away. Perl seems to do everything else for me, so I was surprised.

      On occasion, I've used hashes simply as a way to group variables.
      %hash = ( headers => {data}, data => { "huge hash" } );
      One such instance was a quick and dirty search tool. It was a HoH with a huge number of entries. If, by using a simple assignment, I can get a 15% speedup when iterating through large data sets, I think it's worth doing.
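      Roughly what I mean, sketched with made-up names (process_record just stands in for the real work):

      my $data = $hash{data};               # one assignment up front
      foreach my $id ( keys %{$data} ) {    # huge number of entries
          process_record( $id, $data->{$id} );   # no repeated $hash{data} lookup
      }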

      "That using a reference to an inner structure is a win in your benchmark is clear, as you don't have to redo some calculations. But you cannot do that always - you can only do that if you access the same keys repeatedly. Often, the keys used are variable, and will differ from iteration to iteration."

      I'm not sure what you mean by this point. Care to clarify?

      Thanks,
      -Lee

      "To be civilized is to deny one's nature."
      Update: do you mean $h{$thiskey}{$thatkey}? Even then, orderly traversals are a fairly common thing, so in those cases where time is an issue, I still think it's a good thing to know.
        My post contains an example of using variables as keys.

        Abigail

      On linear code like that it may make things harder to read, but I don't see what's wrong with

      my %hash = ( ..... );
      foreach my $key1 (@keyset1) {
          my $inner = $hash{$key1};
          foreach my $key2 (@keyset2) {
              print $inner->{$key2};
          }
      }

      instead of

      my %hash = ( ..... );
      foreach my $key1 (@keyset1) {
          foreach my $key2 (@keyset2) {
              print $hash{$key1}{$key2};
          }
      }

      That doesn't sacrifice too much in readability, I think.

        They're both far too verbose.
        for my $subhash (@hash{@keyset1}) {
            print $subhash->{$_} for @keyset2;
        }
        or, if you really only have a single statement in there, even

            print @{$_}{@keyset2} for @hash{@keyset1};

        You can be efficient and readable all at the same time.

        Makeshifts last the longest.

        Beauty is in the eye of the beholder, but I find the latter far more readable than the former.

        Abigail

Re: The Cost of Nested Referencing
by gjb (Vicar) on Nov 15, 2002 at 13:05 UTC

    In general, I take the appearance of complicated data structures in code as a sign that it's time to introduce some classes (in refactoring terms: the code smells ;-). Usually the structure actually reflects the relationships between various objects (which can contain other objects, etc.).

    Regardless of whether or not this results in a speedup of the code, introducing OO definitely leads to clearer and cleaner code.
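    For example, here is a quick hand-rolled sketch (names invented for illustration) of wrapping the per-host/per-user accounting from above in a small class, so callers never see the nesting:

    package TimeLog;

    sub new { bless { data => {} }, shift }

    # Accumulate time for a (host, user, process) triple.
    sub add {
        my ($self, $host, $user, $process, $time) = @_;
        $self->{data}{$host}{$user}{$process} += $time;
    }

    # Total recorded for one triple (0 if never seen).
    sub total {
        my ($self, $host, $user, $process) = @_;
        return $self->{data}{$host}{$user}{$process} || 0;
    }

    1;

    Callers then write $log->add($host, $user, $proc, $t) instead of reaching into the structure directly.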

    Just my 2 cents, -gjb-

      I agree for the most part, but sometimes you have to go for speed, and sometimes it's just not worth the extra coding overhead. Most of these $self->{key}{subkey} items (in my code, anyway) are there because they satisfy my requirements; they are not sufficiently clustered to be considered their own object (I don't write OO for OO's sake), or the number of times they are accessed would make the extra level of indirection unacceptable performance-wise.

      -Lee

      "To be civilized is to deny one's nature."
      update: fixed typo.

        Using Class::MethodMaker minimizes this effort, of course.
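        If I remember the interface of the day correctly (treat the exact import arguments as an assumption rather than gospel), that might look something like:

        package SearchIndex;
        use Class::MethodMaker
            new     => 'new',
            get_set => [ qw(headers data) ];   # generates headers()/data() accessors

        package main;
        my $idx = SearchIndex->new;
        $idx->headers({ title => 'quick and dirty search' });
        $idx->data({});                        # the huge hash goes here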

        Of course, you're perfectly right that not everything maps to classes/objects.

        Regards, -gjb-

Re: The Cost of Nested Referencing
by princepawn (Parson) on Nov 15, 2002 at 16:29 UTC
    You might want to look at Alias, which was designed to do what you are doing with references. It's not better, just an alternative.

    Carter's compass: I know I'm on the right track when by deleting something, I'm adding functionality

Re: The Cost of Nested Referencing
by Aristotle (Chancellor) on Nov 16, 2002 at 01:25 UTC
    The way hashes work, Perl cannot optimize $hash{constant_key} for you. There was an article on Perl.com which explained them recently; read it and think about it a bit.

    Makeshifts last the longest.

      I don't know whether such an optimization would be possible in certain cases, but it would only be useful in certain contexts, so it probably wouldn't be worth implementing in perl itself.

      I guess I assumed it did this optimizing because it can't detect nested assignments in ties; I assumed it went straight to the nested reference. Since it appears to hit every key, I'm a bit perplexed as to why it can't catch nested stores and fetches, but that's a different problem for another day.
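      For reference, here's a little tied-hash sketch (my own illustration, built on the standard Tie::StdHash) of the behavior I mean: every nested access still goes through the outer hash's FETCH, which also suggests why perl can't quietly cache $h{a} without changing tie semantics.

      package LoudHash;
      use Tie::Hash;
      our @ISA = ('Tie::StdHash');

      sub FETCH {
          my ($self, $key) = @_;
          print "FETCH($key)\n";            # announce every outer lookup
          return $self->SUPER::FETCH($key);
      }

      package main;
      my %h;
      tie %h, 'LoudHash';
      $h{a} = { b => 0, c => 0 };
      $h{a}{b}++;    # prints FETCH(a)
      $h{a}{c}++;    # prints FETCH(a) again - the outer lookup repeats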

      -Lee

      "To be civilized is to deny one's nature."

        It could do that, if it was smart enough to look around that far in the code. In reality, the optimizer is really rather dumb. If you say perl -MO=Deparse,-x7 you'll get to see exactly what the compiler has done to your code; the -x7 switch tells B::Deparse not to rearrange constructs in order to prettify them. The code you see in that dump will be exactly what perl is going to execute; anything that's mentioned twice there will actually be done twice. Poke around and you'll see that the compiler's optimizer is really quite limited and mainly implements a few highly specialized shortcuts.

        The reason the compiler can't set lookups of constant keys in stone is that the hash itself is not set in stone: as you add or delete keys, things shift about. I suppose at least some shortcuts might still be possible, but, well, given the general level of ignorance of the optimizer...

        Makeshifts last the longest.
