John M. Dlugosz has asked for the wisdom of the Perl Monks concerning the following question:

What is an easier/better way to do this:
foreach my $key (keys %SVGcal::styles) { $styles{$key} = $SVGcal::styles{$key} }
I want to take everything from hash %B and copy it into hash %A. I expect any duplicates to be true duplicates by key name, so the behavior (overwrite or ignore) doesn't matter.

—John

Replies are listed 'Best First'.
Re: Combining a hash into another
by ikegami (Patriarch) on May 27, 2008 at 04:34 UTC

    What's wrong with your solution? Another:

    %styles = ( %styles, %SVGcal::styles );

    And another:

    @styles{ keys %SVGcal::styles } = values %SVGcal::styles;

    You could also use each if memory is an issue:

    while ( my ($k,$v) = each %SVGcal::styles ) { $styles{$k} = $v; }

    Update: Added two other methods.
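
      The three idioms above can be compared directly. A minimal runnable sketch (the sample hashes %styles and %new are made up for illustration; all three produce the same result, with the incoming hash winning on duplicate keys):

      ```perl
      use strict;
      use warnings;

      my %styles = (font => 'serif', fill => 'black');
      my %new    = (fill => 'red',   stroke => 'blue');

      # 1. List merge: flattens both hashes into one list and rebuilds;
      #    later keys win, so %new overrides %styles on duplicates.
      my %a = %styles;
      %a = (%a, %new);

      # 2. Hash slice: assigns all of %new's values into %b in one statement,
      #    without flattening the destination hash.
      my %b = %styles;
      @b{ keys %new } = values %new;

      # 3. each loop: copies one pair at a time, so no large temporary lists.
      my %c = %styles;
      while (my ($k, $v) = each %new) { $c{$k} = $v }

      print "$_ => $a{$_}\n" for sort keys %a;
      ```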

      Re: What's wrong with your solution?

      Mentioning the hash twice, and coding an explicit loop.

      The first one you mention seems inherently inefficient: it unravels both hashes into lists and rebuilds the whole thing. I was wondering if there was already a neat idiom that updates in place.

      I never remember how to "slice" since I do it so seldom; doing it the boooring way was just expedient to getting the code working. So thanks for posting your examples. --John

Re: Combining a hash into another
by pc88mxer (Vicar) on May 27, 2008 at 04:37 UTC
    And there's always this:
    %styles = (%styles, %SVGcal::styles);
    How big is %styles? Maybe we should overload .= for hashes:
    %styles .= %SVGcal::styles;
    or should it be +=?

      Maybe ,= or push I guess.

Re: Combining a hash into another
by dragonchild (Archbishop) on May 27, 2008 at 12:57 UTC
    Hash::Merge
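
    For anything beyond a flat copy (nested hashes, configurable conflict rules), Hash::Merge from CPAN does the work. A minimal sketch, assuming the module is installed, with a plain-copy fallback when it is not (the hash names and values here are made up):

    ```perl
    use strict;
    use warnings;

    my %a = (color => 'red',  size   => 'large');
    my %b = (color => 'blue', weight => 'heavy');

    my %merged;
    if (eval { require Hash::Merge; 1 }) {
        # Hash::Merge's default LEFT_PRECEDENT behavior keeps the
        # left hash's value when a key appears in both.
        %merged = %{ Hash::Merge::merge(\%a, \%b) };
    }
    else {
        # Fallback: plain list merge, with %a listed last so its
        # values win on duplicate keys, matching LEFT_PRECEDENT.
        %merged = (%b, %a);
    }

    print "$_ => $merged{$_}\n" for sort keys %merged;
    ```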

    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
Re: Combining a hash into another
by hangon (Deacon) on May 27, 2008 at 21:01 UTC

    If speed matters, I benchmarked this once for a project. In most cases using hash slices was significantly faster. Info from my notes below. The results and dataset generator are in the readmore link:

    # benchmark four ways to combine hashes:
    cmpthese( -10, {
        forloop  => sub { for (keys %$hash2) { $hash1->{$_} = $hash2->{$_} } },
        whileach => sub { while (my ($key, $val) = each %$hash2) { $hash1->{$key} = $val } },
        slice    => sub { @$hash1{keys %$hash2} = values %$hash2 },
        merge    => sub { my $hash1 = { %$hash1, %$hash2 } },
    }, 'auto');

    Updated: See reply to ikegami below.

      hash1: 10000 keys, hash2: 10000 keys
      duplicates 9000, added 1000 key/values
          slice  854559/s

      hash1: 1000 keys, hash2: 200 keys
      duplicates 100, added 100 key/values
          slice  2673/s

      Why does reducing the work slow things down?

        Oops, my bad. Thanks for catching that ikegami. The info was from benchmark notes a few years old, so I don't really know what happened.

        My best guess is that the benchmarks shown for 10,000 key hashes were actually done on empty hashes. I ran those again and updated the post (also checked the other results). From this, the while-each and foreach loops appear to gain the speed advantage as the hashes get larger.