Re: Perl level control of hash iterators
by diotalevi (Canon) on Feb 28, 2007 at 02:08 UTC
No. You'd have to do this in XS, and outside of the published API. Even when you're in XS, the published API just tells you to reset the iterator prior to using it. I suppose what you could do is track which key you were currently at, let the iterator get perturbed, then reset it and advance it until you find the same key again.
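At the Perl level, a rough sketch of that idea might look like this (resume_each() is a hypothetical helper, not part of any module, and it assumes no keys are added or deleted in between, so the key order stays stable):

    use strict;
    use warnings;

    # Remember the last key handed out, reset the hash's single internal
    # iterator, and re-advance past that key before returning the next pair.
    sub resume_each {
        my ($hash, $last_key) = @_;
        keys %$hash;                                # reset the per-hash iterator
        return each %$hash if !defined $last_key;   # first call: just start
        while ( my ($k, $v) = each %$hash ) {
            return each %$hash if $k eq $last_key;  # skip forward, hand out the next pair
        }
        return;                                     # saved key vanished; stop
    }

    my %h = ( a => 1, b => 2, c => 3 );
    my $last;
    while ( my ($k, $v) = resume_each(\%h, $last) ) {
        $last = $k;
        print "$k => $v\n";
        keys %h;    # something else perturbs/resets the iterator here
    }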
Just to clarify (for all respondents): I'm investigating a bug in Data::Rmap where the following loops forever:
perl -MData::Rmap=rmap_hash -le 'rmap_hash {print keys %{$_->{a}} } ({a=>{1,2}})'
I'm not certain, but I suspect that the keys call interacts with this line from Data::Rmap:
push @return, $self->_rmap($_) for values %$_;
Since the module is for general use, XS or tie-ing aren't really options. Also, $_ is aliased during the callback so that things, including hashes, can be changed (sometimes leading to different disasters). This further constrains the solution space.
Thanks for your tips,
Brad
Update: I think autovivification is to blame
# added exists check
perl -MData::Rmap=rmap_hash -le 'rmap_hash {print keys %{$_->{a}} if exists $_->{a} } ({a=>{1,2}})'
1
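A standalone sketch of the suspected effect, without Data::Rmap (the hash contents and the ten-iteration guard are just for illustration):

    use strict;
    use warnings;

    # Reading through a missing key with $h{a}{deeper} autovivifies $h{a},
    # so a key is added to %h while each() is walking it -- exactly the
    # "add or delete elements while iterating" case the each() docs warn about.
    my %h = ( 1 => 2 );
    my $guard = 0;
    while ( my ($k, $v) = each %h ) {
        my $x = $h{a}{deeper};      # autovivifies $h{a} = {} mid-iteration
        last if ++$guard > 10;      # safety net so the sketch always stops
    }
    print scalar(keys %h), " keys now\n";   # 2: the original key plus the autovivified 'a'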
push @return, $self->_rmap($_) for values %$_;
Hmm, if you do this another way, you should be able to avoid the problem:
push @return, map { $self->_rmap($_) } values %$_;
Here, values is finished before the map even starts.
However... you won't have a problem in the original unless values is traversed lazily. You never know...
Re: Perl level control of hash iterators
by ikegami (Patriarch) on Feb 28, 2007 at 04:01 UTC
If you end up wanting to make your own iterator, here's one:
sub make_hash_iterator {
    my ($hash) = @_;
    my @keys = keys %$hash;        # snapshot the keys; the closure owns them
    return sub {
        return () if !@keys;
        if (wantarray) {
            # List context: hand back the next (key, value) pair.
            my $key = shift(@keys);
            return ($key, $hash->{$key});
        } else {
            # Scalar context: write the next key into $_[0],
            # return true while keys remain.
            $_[0] = shift(@keys);
            return 1;
        }
    };
}
my $i = make_hash_iterator(\%hash);
while ($i->(my $key)) { # scalar context
...
}
my $i = make_hash_iterator(\%hash);
while (my ($key) = $i->()) { # list context
... # Will fail for '' and '0'
} # without parens around $key.
my $i = make_hash_iterator(\%hash);
while (my ($key, $val) = $i->()) {
...
}
Re: Perl level control of hash iterators
by dragonchild (Archbishop) on Feb 28, 2007 at 03:24 UTC
You can do it in XS and you can also do it by tying the hash, then providing FIRSTKEY and NEXTKEY. This is how, for instance, you can have a sorted hash. Look at Tie::Hash::Sorted (by our own Limbic~Region) for an example.
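A bare-bones sketch of that approach (hypothetical package name, sorting keys just for illustration; Tie::Hash::Sorted does this properly):

    package Tie::Hash::SortedSketch;   # hypothetical name
    use strict;
    use warnings;

    sub TIEHASH  { my $class = shift; bless { data => { @_ }, keys => [] }, $class }
    sub STORE    { my ($s, $k, $v) = @_; $s->{data}{$k} = $v }
    sub FETCH    { my ($s, $k) = @_; $s->{data}{$k} }
    sub EXISTS   { my ($s, $k) = @_; exists $s->{data}{$k} }
    sub DELETE   { my ($s, $k) = @_; delete $s->{data}{$k} }
    sub CLEAR    { my ($s) = @_; %{ $s->{data} } = () }

    # The iteration state lives in our own object (a key list), not in
    # perl's internal hash iterator, so we control the traversal order.
    sub FIRSTKEY { my ($s) = @_; $s->{keys} = [ sort keys %{ $s->{data} } ]; shift @{ $s->{keys} } }
    sub NEXTKEY  { my ($s) = @_; shift @{ $s->{keys} } }

    package main;
    tie my %h, 'Tie::Hash::SortedSketch', b => 2, a => 1, c => 3;
    print "$_\n" for keys %h;   # a b c, in sorted order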
My criteria for good software:
- Does it work?
- Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
Ah, that satisfies my curiosity.
Thanks
Re: Perl level control of hash iterators
by rhesa (Vicar) on Feb 28, 2007 at 02:13 UTC
You can only achieve that by writing your own iterators. The documentation for each clearly states:
There is a single iterator for each hash, shared by all each, keys, and values function calls in the program; it can be reset by reading all the elements from the hash, or by evaluating keys HASH or values HASH. If you add or delete elements of a hash while you're iterating over it, you may get entries skipped or duplicated, so don't.
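A small illustration of the reset behaviour described there (the hash contents are arbitrary, not from the docs):

    use strict;
    use warnings;

    my %h = ( a => 1, b => 2, c => 3 );

    my ($first) = each %h;     # advance the shared internal iterator by one key
    keys %h;                   # keys in void context resets that iterator...
    my ($again) = each %h;     # ...so each starts over from the first key
    print $first eq $again ? "reset\n" : "not reset\n";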
Re: Perl level control of hash iterators
by GrandFather (Saint) on Feb 28, 2007 at 02:13 UTC
You may find the discussion in the node The Anomalous each() interesting (but probably not helpful).
DWIM is Perl's answer to Gödel
Re: Perl level control of hash iterators
by andye (Curate) on Feb 28, 2007 at 13:04 UTC
Would it be cheating to copy the original hash into a new one, and iterate over that instead?
(I can see why this could be too inefficient for production code, but maybe useful for debugging, I'm thinking...)
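Something along those lines, as a throwaway sketch (%original and the key names are just stand-ins):

    use strict;
    use warnings;

    my %original = ( a => 1, b => 2 );

    # Walk a shallow copy; whatever the loop body does to %original can no
    # longer disturb the iterator we are actually using.
    my %copy = %original;
    while ( my ($k, $v) = each %copy ) {
        $original{"$k-seen"} = $v;      # mutating %original mid-loop is now harmless
        print "$k => $v\n";
    }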
best, andye
Re: Perl level control of hash iterators
by Moron (Curate) on Feb 28, 2007 at 18:44 UTC
Looking for the catch, I searched the OP for any references to threads or forks. In the absence of a multiprocess environment I don't see the problem, e.g.:
my $href = \%hash;
my $aref = [];
@$aref = keys %hash;
for ( my $i = 0; $i <= $#$aref; $i++ ) {
    # do something with $href->{ $aref->[$i] }
    $href->{ $aref->[$i] }{XYZ}{_callback}->( $href, $aref, $i );
    # do more with $href->{ $aref->[$i] }
}