in reply to Re^3: Indepedent lazy iterators for the same hash?
in thread Indepedent lazy iterators for the same hash?

> I don't understand the point about real efficiency,

Maybe an example helps illustrate the point.

What is more efficient:

 for (1..1e6) { ... }

or

 @k=1..1e6; for (@k) {...} ?
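To make the comparison concrete, here is a minimal sketch (variable names are mine) of the two loops side by side. Both compute the same result; the difference is that Perl optimizes `for` over a literal range into a counting loop, while the second version must allocate a million-element array first.

```perl
use strict;
use warnings;

# Lazy: foreach over a literal range is optimized into a
# counting loop -- no million-element list is built.
my $sum_lazy = 0;
$sum_lazy += $_ for 1 .. 1e6;

# Eager: ~1e6 scalars are allocated and copied up front,
# then the loop walks the array.
my @k = 1 .. 1e6;
my $sum_array = 0;
$sum_array += $_ for @k;

print "$sum_lazy $sum_array\n";   # same answer, different memory cost
```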

> but if you want absolutely no dependencies,

I definitely need no dependency, because this lazy iterator breaks when comparing identical hashes (i.e. the same hash on both sides).

This seems like a trivially avoided case, but what if you're working with deeply nested data structures?

This effectively means:

NEVER use each within an iterator operating on a shared data structure, because it has global side effects!
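A small sketch of what those global side effects look like in practice: each() keeps exactly one iterator per hash, so two nested "independent" scans over the same hash silently interfere with each other.

```perl
use strict;
use warnings;

my %h = (a => 1, b => 2, c => 3, d => 4);

my @outer;
while ( my ($k) = each %h ) {
    push @outer, $k;
    # An inner each() on the SAME hash advances the one shared
    # iterator, so the outer loop skips every other entry.
    my ($k2) = each %h;
}

# The outer loop saw only half the keys:
print scalar(@outer), " of 4 keys seen\n";   # prints "2 of 4 keys seen"
```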

> you can always roll your own iterator.

I'm well aware; that's why I asked how to avoid this.

Cheers Rolf

(addicted to the Perl Programming Language)

Replies are listed 'Best First'.
Re^5: Indepedent lazy iterators for the same hash?
by AnomalousMonk (Archbishop) on Jun 30, 2013 at 23:33 UTC
    What is more efficient:
        for (1..1e6) { ... }
    or
        @k=1..1e6; for (@k) {...} ?

    Yes, I see your point, but what I was getting at in asking about real efficiency is this: it is my gut feeling (unsupported by any benchmarking; there is no real application to benchmark) that if you have a hash with up to, say, about a million keys, the time to copy those keys into a separate array (as in the example code) will be trivial in comparison to the time needed to actually do whatever it is you want to do with those key/value pairs (as returned by the custom each).
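    For reference, here is one way the copy-the-keys approach can be sketched as a closure (the name make_iter is my own invention, not from the example code): each call snapshots the keys once and keeps its own private position, so two iterators over the same hash never interfere.

```perl
use strict;
use warnings;

# Each call to make_iter() copies the keys once and closes over a
# private index -- no shared each() state between iterators.
sub make_iter {
    my ($href) = @_;
    my @keys = keys %$href;   # the one-time copy under discussion
    my $i    = 0;
    return sub {
        return if $i >= @keys;          # exhausted: return empty list
        my $k = $keys[ $i++ ];
        return ( $k, $href->{$k} );
    };
}

my %h   = ( a => 1, b => 2, c => 3 );
my $it1 = make_iter(\%h);
my $it2 = make_iter(\%h);    # fully independent of $it1

my ($k1) = $it1->();
my ($k2) = $it1->();
my ($k3) = $it2->();         # $it2 still starts from the beginning
```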

    If you have more than 10 million keys in a hash, you're probably on the verge of moving everything into a database anyway.

    The 1-10 million key range would seem (again, my gut feeling here) to be where the question of run-time efficiency would come into play, but why cross that bridge before you come to it? (Or are you already standing on that bridge?)