Some code I reviewed for a colleague defined a hash something like the following (a simplified version):
    my %fred = (
        [1, 2, 3] => [1, 1, 0],
        [3, 4, 5] => [0, 1, 0],
        [0, 2, 4] => [1, 2, 1],
        # ...
    );
The code accesses this hash using the array references as keys, and since each anonymous "key" array is a separate allocation, the references, and hence the strings they become when used as hash keys, must all be distinct. So no problem there.
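Just to be explicit about what actually ends up in the hash: hash keys are always strings, so each reference key is stringified to something like "ARRAY(0x...)". A minimal sketch:

    use strict;
    use warnings;

    my %fred = ( [1, 2, 3] => [1, 1, 0], [3, 4, 5] => [0, 1, 0] );

    # Each key is the stringified address of its anonymous array,
    # e.g. "ARRAY(0x560f1a2b3c40)" -- unique so long as the arrays
    # are separate allocations.
    print "$_\n" for keys %fred;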
But the code later uses a second hash constructed by calling reverse() on the above, and it seems to me that may be problematic if any of the anonymous "value" arrays are equal, which in the circumstances it appears they may well be. Perl might be clever enough to spot identical value arrays and keep just one copy for each such set, in which case their references would stringify identically and reverse() would lose all but one of the duplicate entries, at random.
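To make that concrete: if two values really were the same reference, reverse() would silently drop an entry. A contrived sketch (the shared reference here is deliberate, not something the real code does):

    use strict;
    use warnings;

    my $shared = [1, 1, 0];
    my %h      = ( a => $shared, b => $shared, c => [0, 1, 0] );

    # reverse() flattens the hash to a (key, value, ...) list and
    # swaps the pairs; two values that stringify identically can
    # keep only one of their keys in the new hash.
    my %rev = reverse %h;
    print scalar(keys %rev), "\n";   # prints 2, not 3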
Even if perl does not merge identical arrays today, who is to say it won't start doing so in some future version?
My colleague who wrote the code assures me this problem won't arise because (in his words) "the hash values are references to arrays of values, not the array values themselves". But I'm not convinced: if more than one identical value array shared the same location, then of course their references would be the same.
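One could test directly whether perl shares identical anonymous arrays, e.g.:

    use strict;
    use warnings;
    use Scalar::Util qw(refaddr);

    # Two anonymous constructors with identical contents: does perl
    # allocate one shared array, or two distinct ones?
    my $x = [1, 1, 0];
    my $y = [1, 1, 0];
    print refaddr($x) == refaddr($y) ? "shared\n" : "distinct\n";

On current perls this prints "distinct", though that is only an observation about today's behaviour, not a guarantee for future versions.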
Any ideas?
Regards
John R Ramsden