in reply to Re^2: Indepedent lazy iterators for the same hash?
in thread Indepedent lazy iterators for the same hash?

... copying the keys in advance isn't really efficient... My hopes were not to rely on dependencies...

I don't understand the point about real efficiency, but if you want absolutely no dependencies, you can always roll your own iterator. This one simply stops when it's exhausted, but it's easy to imagine simulating the behavior of the each built-in when it reaches the end of iteration.

>perl -wMstrict -le
"my %hash = qw( one uno two dos three tres four quatro five cinco );

sub my_each (\%) {
    my $hr = shift;
    my @k  = keys %$hr;

    return sub {
        return unless @k;
        my $k = shift @k;
        return $k, $hr->{$k};
    };
}

my $e1 = my_each %hash;
my $e2 = my_each %hash;

print qq{e1: @{[ $e1->() ]}; @{[ $e1->() ]}.};
print qq{e2: @{[ $e2->() ]} \n};
print qq{e1: @{[ $e1->() ]}; @{[ $e1->() ]}.};
print qq{e2: @{[ $e2->() ]} \n};
print qq{e1: @{[ $e1->() ]}; @{[ $e1->() ]}.};
print qq{e2: @{[ $e2->() ]} \n};
print qq{e1: @{[ $e1->() ]}; @{[ $e1->() ]}.};
print qq{e2: @{[ $e2->() ]} \n};
"
e1: three tres; five cinco.
e2: three tres

e1: one uno; two dos.
e2: five cinco

e1: four quatro; .
e2: one uno

e1: ; .
e2: two dos
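For completeness, here is one way the end-of-iteration behavior of the built-in each might be simulated (a sketch; my_each_restarting is a made-up name): return an empty list exactly once when the keys run out, then begin a fresh pass, just as each resets the hash's internal iterator after signalling exhaustion.

```perl
use strict;
use warnings;

# Sketch: like my_each above, but mimics the built-in each by returning
# an empty list exactly once at exhaustion and then restarting the pass.
# (my_each_restarting is a hypothetical name used for illustration.)
sub my_each_restarting (\%) {
    my $hr   = shift;
    my @k    = keys %$hr;
    my $done = 0;

    return sub {
        unless (@k) {
            unless ($done) {
                $done = 1;
                return;              # signal end of pass with an empty list
            }
            @k    = keys %$hr;       # next call after that: start over
            $done = 0;
            return unless @k;        # an empty hash stays exhausted
        }
        my $k = shift @k;
        return $k, $hr->{$k};
    };
}

my %hash  = qw( one uno two dos );
my $it    = my_each_restarting %hash;
$it->() for 1 .. 2;                  # drain the first pass
my @end   = $it->();                 # () once, marking the end
my @again = $it->();                 # a key/value pair: pass two has begun
```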

Update: Changed the code example slightly to clarify the termination behavior of the iterator.

Replies are listed 'Best First'.
Re^4: Indepedent lazy iterators for the same hash?
by LanX (Saint) on Jun 30, 2013 at 21:32 UTC
    > I don't understand the point about real efficiency,

    Maybe an example helps to illustrate the point.

    What is more efficient:

     for (1..1e6) { ... }

    or

     @k=1..1e6; for (@k) {...} ?
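    That comparison can be checked with the core Benchmark module (a rough sketch; the range size is arbitrary). foreach over a bare range iterates lazily without building a list, while the copying version must allocate and fill the array first:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $n = 1e5;    # arbitrary range size for illustration

cmpthese( -1, {
    # foreach over a range counts lazily: no intermediate list is built
    direct => sub { my $sum = 0; $sum += $_ for 1 .. $n; },
    # copying allocates and fills an intermediate array before the loop runs
    copied => sub { my $sum = 0; my @k = 1 .. $n; $sum += $_ for @k; },
} );
```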

    > but if you want absolutely no dependencies,

    I definitely need a dependency-free solution, because an each-based lazy iterator breaks when comparing identical hashes.

    This seems like a trivially avoided case, but what if you're working with deeply nested data structures?

    This effectively means:

    NEVER use each within an iterator operating on a shared data structure, because it has global side effects!
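    The warning above can be demonstrated directly (a small self-contained sketch; the hash contents and the guard value of 20 are arbitrary). Both loops share the single iterator stored inside the hash, so the inner pass keeps resetting the outer loop's position:

```perl
use strict;
use warnings;

# Both each loops share the one iterator that lives inside %h.
# The inner pass runs each to exhaustion, which resets that iterator,
# so the outer while restarts from the first key on every iteration.
my %h = map { $_ => 1 } 'a' .. 'e';

my $outer = 0;
OUTER: while ( my ($k) = each %h ) {
    while ( my ($k2) = each %h ) { }    # exhausts and resets the shared iterator
    last OUTER if ++$outer >= 20;       # guard: without it, this never terminates
}
# Only 5 keys exist, yet $outer reaches the guard value of 20.
```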

    > you can always roll your own iterator.

    I'm well aware; that's why I asked how to avoid this.

    Cheers Rolf

    ( addicted to the Perl Programming Language)

      What is more efficient:
          for (1..1e6) { ... }
      or
          @k=1..1e6; for (@k) {...} ?

      Yes, I see your point, but what I was getting at in asking about real efficiency is my gut feeling (unsupported by any benchmarking: there is no real application to benchmark) that for a hash with up to, say, about a million keys, the time to copy those keys into a separate array (as in the example code) will be trivial in comparison to the time needed to actually do whatever it is you want to do with those key/value pairs (as returned by the custom each).

      If you have more than 10 million keys in a hash, you're probably on the verge of moving everything into a database anyway.

      The 1-10 million key range would seem (again, my gut feeling here) to be where the question of run-time efficiency would come into play, but why cross that bridge before you come to it? (Or are you already standing on that bridge?)