in reply to Re: removing duplicates from an array of hashes
in thread removing duplicates from an array of hashes

The disadvantage is that delete works well on hashes but badly (and is discouraged) on arrays. It does not actually remove an entry; it merely undefs it (except when it is the last element(s), which is why it worked in the original example — but put, e.g., two 'a' ids at the start of the array and you'll see it). See also the documentation of delete.
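
A small illustration of that behaviour (my own sketch, not from the thread): deleting a middle element leaves an undef hole and the array keeps its length, while deleting the last element actually shrinks the array.

    use strict;
    use warnings;

    # delete on a middle element: leaves an undef "hole"
    my @middle = ('a', 'b', 'c');
    delete $middle[1];
    my $len_middle = scalar @middle;      # still 3
    my $is_hole    = !defined $middle[1]; # true - the slot is now undef

    # delete on the last element: the array really shrinks
    my @end = ('a', 'b', 'c');
    delete $end[2];
    my $len_end = scalar @end;            # now 2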

Greetings,
Janek Schleicher

PS: I agree that the grep solution is the usual way. I was just interested in writing an in-place algorithm, as that is sometimes useful too, e.g. when working with big data.
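
For reference, a sketch of the grep idiom mentioned above (variable names are mine): keep only the first hash seen for each id, in first-seen order.

    use strict;
    use warnings;

    my @records = map { { id => $_ } } ('b', 'a', 'b', 'c', 'b');

    # %seen counts ids as we go; grep keeps an element only the
    # first time its id appears (when the count is still 0)
    my %seen;
    my @unique = grep { !$seen{ $_->{id} }++ } @records;

    # @unique now holds the hashes with ids b, a, c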

Re^3: removing duplicates from an array of hashes
by NetWallah (Canon) on Apr 17, 2014 at 15:15 UTC
    Thanks for pointing out that "delete $array[$idx]" is deprecated, and generates undefs.

    Just to illustrate the subtle behaviour that was masked in my previous post, here is a demo of the potential problems the unexpected undef can cause:

    perl -MData::Dumper -E '
        my $r = [ map { id => $_ }, ("b", "a".."c", "b") ];
        say Dumper $r;
        my %h;
        $h{ $r->[$_]{id} }++ and delete $r->[$_] for 0 .. $#$r;
        say Dumper $r
    '

    --- SECOND (Relevant) PART of OUTPUT ---
    $VAR1 = [
              { 'id' => 'b' },
              { 'id' => 'a' },
              undef,
              { 'id' => 'c' }
            ];
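
    For contrast, here is a sketch (my own, not from the thread) of an in-place dedupe that uses splice instead of delete, so duplicates are truly removed rather than undef'd. Walking the array backwards keeps the earlier indices stable while we splice, and counting occurrences first lets us keep the first hash for each id.

        use strict;
        use warnings;

        my @records = map { { id => $_ } } ('b', 'a', 'b', 'c', 'b');

        # first pass: count occurrences of each id
        my %count;
        $count{ $_->{id} }++ for @records;

        # second pass, from the end: if an id still has more than one
        # occurrence, this element is a duplicate of an earlier one -
        # splice it out and decrement the count
        for my $i (reverse 0 .. $#records) {
            if ($count{ $records[$i]{id} } > 1) {
                $count{ $records[$i]{id} }--;
                splice @records, $i, 1;
            }
        }

        # @records now holds ids b, a, c with no undef holes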

            What is the sound of Perl? Is it not the sound of a wall that people have stopped banging their heads against?
                  -Larry Wall, 1992