in reply to removing duplicates from an array of hashes

In-place TIMTOWTDI using 'delete', inspired by bigj (++):
perl -MData::Dumper -E 'my $r=[map { id=>$_ }, ("a".."c","b")]; say Dumper $r; my %h; $h{$r->[$_]{id}}++ and delete $r->[$_] for 0..$#$r; say Dumper $r'
IMHO, kcott's grep (++) is the cleanest, and the classic/canonical approach.
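
For reference, here is a grep-based version in the same one-liner style (my own reconstruction of the idea, not necessarily kcott's exact code); it rebuilds the array keeping only the first hash seen for each id:

perl -MData::Dumper -E 'my $r=[map { id=>$_ }, ("a".."c","b")]; my %h; @$r = grep { !$h{$_->{id}}++ } @$r; say Dumper $r'

The !$h{...}++ test is true only on the first encounter of each id, so duplicates are filtered out and the array is truly compacted, with no holes.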

        What is the sound of Perl? Is it not the sound of a wall that people have stopped banging their heads against?
              -Larry Wall, 1992

Re^2: removing duplicates from an array of hashes
by bigj (Monk) on Apr 17, 2014 at 06:41 UTC
    The disadvantage is that delete works well on hashes, but badly (and is deprecated) on arrays. It does not really delete an entry but just undefs it (except when it is the last element(s), which is why it worked in the original example; but if you put e.g. two 'a' ids at the start of the array, you'll see it). See also the documentation of delete.

    Greetings,
    Janek Schleicher

    PS: I agree that the grep solution is the usual way. I was just interested in writing an in-place algorithm, as sometimes that's useful, too, when working with big data.
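
    For the record, here is an in-place variant that really removes the duplicates (a sketch of my own, using splice instead of delete; it keeps the first occurrence of each id):

    perl -MData::Dumper -E 'my $r=[map { id=>$_ }, ("b","a".."c","b")]; my %seen; for (my $i=0; $i<@$r; ) { $seen{$r->[$i]{id}}++ ? splice(@$r,$i,1) : $i++ } say Dumper $r'

    Because splice shrinks the array, $i is only advanced when the current element is kept; the result is [ { 'id' => 'b' }, { 'id' => 'a' }, { 'id' => 'c' } ] with no undef holes.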
      Thanks for pointing out that "delete $array[$idx]" is deprecated and leaves undefs behind.

      Just to illustrate the subtle behaviour that was masked in my previous post, here is a demo of the potential disaster that the resulting undef could cause:

      perl -MData::Dumper -E 'my $r=[map { id=>$_ }, ("b","a".."c","b")]; say Dumper $r; my %h; $h{$r->[$_]{id}}++ and delete $r->[$_] for 0..$#$r; say Dumper $r'

      --- SECOND (Relevant) PART of OUTPUT ---
      $VAR1 = [
                { 'id' => 'b' },
                { 'id' => 'a' },
                undef,
                { 'id' => 'c' }
              ];
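
      If you do end up with such holes, a follow-up pass can compact the array (my addition, not from the original post):

      @$r = grep { defined } @$r;   # drop the undef holes left by delete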
