in reply to Re: A brain twister? (how to make 2 lines->1)
in thread A brain twister? (how to make 2 lines->1)

Your code generates correct output! (minus the @{$v} -- not sure how that got there...doesn't make any sense).

And that may be the best solution I'm going to be able to come up with for now, but it isn't nearly as simple as 'join' is for arrays... though I'm all but certain that the map is a better choice than the while loop.

What I was really wanting was an 'each' in place of keys that, in a list-of-lists context, would generate something along the lines of ((k1, v1), (k2, v2), ...) that I could then feed to map, with map pulling off one pair per iteration.
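(As an aside: newer versions of List::Util apparently ship a pairmap that behaves a lot like this -- it walks a flat key/value list two items at a time, exposing each pair as $a and $b. A rough sketch, assuming a List::Util new enough to export it (1.29-ish) and using throwaway data:)

    use strict;
    use warnings;
    use List::Util qw(pairmap);   # pair functions need a newer List::Util

    my %ahash = (one => 1, two => 2, three => 3);

    # %ahash flattens to a key/value list; pairmap peels off one pair per
    # iteration as $a and $b -- no second lookup back into the hash.
    my $str = "{" . join(", ", pairmap { qq("$a"=>"$b") } %ahash) . "}";
    print "$str\n";   # e.g. {"three"=>"3", "one"=>"1", "two"=>"2"}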

I'm not saying it is possible, but for the work the 'keys' already does, an 'each' returning all the pairs would be of the same order as an 'each' that 'streams' kv pairs. I wanted to avoid the 2nd hash lookup ($ahash{$_}), as I figured that makes the algorithm more like O(n^1.5), where keys plus a streaming each would be O(n).

The 'keys' has already done the work of looping through the entire hash and has thrown away the values part, which then gets regenerated by the $ahash{$_} lookup... I was just trying to find a way to avoid that waste if it was possible. A bit "arcane", but it was a hope...
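(To make the tradeoff concrete, here's a rough sketch of the two shapes being compared -- the keys+map form with its second lookup, and the each form that streams the pairs but needs its own bookkeeping; %ahash is just stand-in data:)

    use strict;
    use warnings;

    my %ahash = (one => 1, two => 2, three => 3);

    # keys + map: every $ahash{$_} is a second trip back into the hash
    my $via_map = "{" . join(", ", map { qq("$_"=>"$ahash{$_}") } keys %ahash) . "}";

    # while + each: streams (key, value) pairs with no second lookup,
    # but needs an accumulator and its own separator handling
    my $via_each = "";
    while (my ($k, $v) = each %ahash) {
        $via_each .= ", " if length $via_each;
        $via_each .= qq("$k"=>"$v");
    }
    $via_each = "{$via_each}";

    print "$via_map\n$via_each\n";   # same pairs, possibly different order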

I think my original solution might be closer to O(1), but the overhead of the while vs. map would drown out any benefits even if it could be coerced into producing the right (order-independent) output:

{"three"=>"3", "one"=>"1", "two"=>"2"}

Re^3: A brain twister? (how to make 2 lines->1)
by MidLifeXis (Monsignor) on Jun 27, 2012 at 13:18 UTC

    Hash lookups, if done right, are O(1) operations (see hash_table - thanks Ransom, and possibly perldata (unverified) - thanks moritz). Since you are just looping through the list of keys (O(n)), looking up the hash table entry for each key (O(1)), and generating a string from the two values (O(1)), the order of the map should be O(n)*.

    Performance is another issue entirely, but unless you have profiled the two implementations and the performance boost is appreciable and worth the additional complexity, go with the implementation that is simpler to understand.

    * - assuming that I am remembering my undergrad coursework properly.
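    If you do decide to profile the two, the core Benchmark module makes the comparison cheap to set up. A minimal sketch, assuming a throwaway %ahash of 10,000 keys (the names and sizes here are just placeholders):

        use strict;
        use warnings;
        use Benchmark qw(cmpthese);

        my %ahash = map { ("key$_" => $_) } 1 .. 10_000;

        # run each candidate for at least 2 CPU seconds and compare rates
        cmpthese(-2, {
            map_keys => sub {
                my $s = join ", ", map { qq("$_"=>"$ahash{$_}") } keys %ahash;
            },
            while_each => sub {
                my $s = "";
                while (my ($k, $v) = each %ahash) {
                    $s .= qq("$k"=>"$v", );
                }
            },
        });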

    Update: used performance instead of speed since you may not be optimizing for speed, but some other metric.

    --MidLifeXis

      One of the reasons perl went to randomizing hashes was that people were deliberately feeding in bad hash data to create DoS attacks... hashes with the wrong data degrade to O(n) -- slightly worse than a linear search due to the overhead, but in the IDEAL case they are O(1). That's why I ***thought*** that on average they'd grow at roughly the square root of n, but it's largely dependent on the number of keys and the size of the hash table. When the number of keys is ~<= 70% of the hash table size, you get near O(1) (a quick way to peek at those numbers is sketched below)...

      wikipedia says it is possible, with a self-balancing search tree per bucket (brain hurts just thinking about intermixing one of those with a hash table...), to reduce the worst case to O(log n), but it's not usually worth the tradeoff in algorithmic complexity -- sorta like me trying to improve on 'map' and 1 hash lookup... ;-)
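      (For the curious: the keys-vs-bucket-count relationship mentioned above can be watched on a real hash. A rough sketch, assuming a perl new enough that Hash::Util exports num_buckets and used_buckets:)

          use strict;
          use warnings;
          use Hash::Util qw(num_buckets used_buckets);   # needs a reasonably new perl

          my %h = map { ("key$_" => $_) } 1 .. 1_000;

          # perl grows the bucket array as keys are added, which is what keeps
          # the average lookup close to O(1)
          printf "keys=%d  buckets=%d  used=%d  load=%.2f\n",
              scalar(keys %h), num_buckets(%h), used_buckets(%h),
              keys(%h) / num_buckets(%h);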