in reply to A brain twister? (how to make 2 lines->1)

When I find myself being too clever (or find that I have been too clever in a previous version of code), I try to simplify it. If I don't, I find myself spending way too much time trying to understand the clever code I wrote yesterday (or last week / month / year), and wanting to go back and smack some sense into myself.

push @{$outp}, sprintf '"%s"=>"%s"', @$_ foreach @{&{ sub ($) { my $_=["",[]]; push @{$_->[1]},\@{$_->[0]} while @{$_->[0]=[each $_[0]]}; $_->[1] } } (\%ahash)}, @{$v};

There is a lot packed into that code example that could probably be made more understandable, at the very least, with some whitespace.

push @{$outp}, sprintf '"%s"=>"%s"', @$_
    foreach @{ &{
        sub ($) {
            my $_ = [ "", [] ];
            push @{ $_->[1] }, \@{ $_->[0] } while @{ $_->[0] = [ each $_[0] ] };
            $_->[1]
        }
    }(\%ahash) }, @{$v};

In this case, though, I would probably give it some more care and feeding than just whitespace. Specifically (assuming that I am understanding what you are attempting to do), I would look at using map (map { sprintf ... }) along with keys, instead of the foreach structure that you have.

Assuming that I am understanding your intent (which I still may not have grokked), does this do what you are attempting?

push @{$outp}, ( map { sprintf '"%s"=>"%s"', $_, $ahash{$_} } keys %ahash ), @{$v};
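
For what it's worth, here is a minimal, self-contained sketch of that line (the names %ahash, $outp, and $v are taken from the thread; the sample data is made up for illustration):

use strict;
use warnings;

my %ahash = ( one => 1, two => 2, three => 3 );   # made-up sample data
my $outp  = [];                                   # assuming $outp is an array reference, as @{$outp} implies
my $v     = [];                                   # placeholder for the trailing @{$v} above (contents unknown from the thread)

push @{$outp}, ( map { sprintf '"%s"=>"%s"', $_, $ahash{$_} } keys %ahash ), @{$v};

print '{', join( ', ', @{$outp} ), "}\n";         # e.g. {"three"=>"3", "one"=>"1", "two"=>"2"} (key order varies)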

Just my $0.02

--MidLifeXis

Re^2: A brain twister? (how to make 2 lines->1)
by perl-diddler (Chaplain) on Jun 27, 2012 at 09:09 UTC
    Your code generates correct output! (Minus the @{$v} -- not sure how that got there... it doesn't make any sense.)

    And that may be the best solution I'm gonna be able to come up with for now, but it isn't nearly as simple as 'join' is for arrays... though I'm all but certain that the map is a better choice than the while loop.
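
    (For comparison, the array case really is a single join, while the hash case needs a map step before the join; a rough sketch with made-up data:)

    use strict; use warnings;
    my @arr   = ( 1, 2, 3 );
    my %ahash = ( one => 1, two => 2, three => 3 );
    my $arr_str  = join ', ', @arr;                                        # arrays: one join, done
    my $hash_str = join ', ', map { qq{"$_"=>"$ahash{$_}"} } keys %ahash;  # hashes: map first, then join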

    I was really wanting 'each' in place of keys: in a list-of-lists context it would generate something along the lines of ((k, v), (k2, v2), ...), which I could then feed to map, and map would pull off one pair per call.

    Not saying it is possible, but an 'each' that did the same work as the 'keys' (producing the whole list at once) would be of the same order as an 'each' that 'streams' k/v pairs. I wanted to avoid the second hash lookup ($ahash{$_}), as that makes the algorithm more like O(n^1.5), where keys plus a streaming each would be O(n).

    The 'keys' has already done the work of looping through the entire hash, and it threw away the values part, which then gets regenerated by the $ahash{$_} lookup... I was just trying to find a way to avoid that waste, if it was possible... a bit "arcane", but... it was a hope...
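
    (Perl's each does stream exactly one k/v pair per call -- it just doesn't fit map's list-at-once model -- so the streaming, single-lookup version ends up as a while loop. A rough sketch with made-up data:)

    use strict; use warnings;
    my %ahash = ( one => 1, two => 2, three => 3 );
    my @pairs;
    while ( my ( $k, $val ) = each %ahash ) {        # one key/value pair per call, no second lookup
        push @pairs, sprintf '"%s"=>"%s"', $k, $val;
    }
    print '{', join( ', ', @pairs ), "}\n";          # {"three"=>"3", "one"=>"1", "two"=>"2"} in some order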

    I think my original solution might be closer to O(1), but the overhead of the while vs. map would drown out any benefits even if it could be coerced into the right output... (order independent)...

    {"three"=>"3", "one"=>"1", "two"=>"2"}

      Hash lookups, if done right, are O(1) algorithms (see hash_table - thanks Ransom and possibly perldata (unverified) - thanks moritz). Since you are just looping through the list of keys (O(n)), looking up the hash table entry for that key (O(1)), and generating a string from the two values (O(1)), the order of the map should be O(n)*.

      Performance is another issue entirely, but unless you are profiling the two implementations, and the performance boost is appreciable and worth the additional complexity, go with the implementation that is simpler to understand.

      * - assuming that I am remembering my undergrad coursework properly.

      Update: used performance instead of speed since you may not be optimizing for speed, but some other metric.
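
      (If it ever does come down to profiling, the core Benchmark module's cmpthese is one reasonable starting point; the test data below is made up:)

      use strict; use warnings;
      use Benchmark qw(cmpthese);

      my %ahash;
      $ahash{"key$_"} = $_ for 1 .. 1000;              # made-up test data

      cmpthese( -2, {                                  # run each variant for ~2 CPU seconds
          map_keys   => sub { my @o = map { sprintf '"%s"=>"%s"', $_, $ahash{$_} } keys %ahash; },
          while_each => sub { my @o; while ( my ($k, $v) = each %ahash ) { push @o, sprintf '"%s"=>"%s"', $k, $v; } },
      } );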

      --MidLifeXis

        One of the reasons Perl went to randomizing hashes was that people were deliberately using bad hash data to create DoS attacks... Hashes given the wrong data become O(n) -- slightly worse than a linear search, due to the overhead -- but in the IDEAL case they are O(1). That's why I ***thought*** that, on average, they'd grow at ~ the square root of n, but it's largely dependent on the number of keys and the size of the hash table. When #keys is ~<=~ 70% of the hash table size, you get near O(1)...

        Wikipedia says it is possible, by using self-balancing trees for the buckets (brain hurts just thinking about intermixing one of those with a hash table...), to reduce the worst case to O(log n), but it's not usually worth the tradeoff in algorithmic complexity -- sorta like me trying to improve on 'map' and 1 hash lookup... ;-)