in reply to Remove duplicated data from array ref

I wish I could do this without having to add the new values to a new variable.

There's no reason you can't do this with the code johngg provided:

c:\@Work\Perl>perl -wMstrict -MData::Dump -le "my $data = [ [ 123 ], [ 789 ], [ 'dup' ], [ 456 ], [ 123 ], [ 'dup' ], [ 543 ], ]; dd $data; ;; $data = do { my %seen; [ grep { not $seen{ $_->[0] }++ } @$data ]; }; dd $data; "
[[123], [789], ["dup"], [456], [123], ["dup"], [543]]
[[123], [789], ["dup"], [456], [543]]
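Unrolled from the one-liner into a standalone script (same %seen/grep idiom; I print the first elements with a plain join rather than Data::Dump so the example needs no non-core module):

```perl
use strict;
use warnings;

my $data = [ [123], [789], ['dup'], [456], [123], ['dup'], [543] ];

# keep only the first array-ref for each distinct ->[0] value;
# $seen{...}++ is false (0) on first sight, true thereafter
$data = do {
    my %seen;
    [ grep { not $seen{ $_->[0] }++ } @$data ];
};

print join(' ', map { $_->[0] } @$data), "\n";  # 123 789 dup 456 543
```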

The problem with this or any similar approach is that there will be a moment after the anonymous array
    [ grep { ... } @$data ]
is built and before its reference address is taken and assigned to $data during which two possibly very large arrays (and a hash!) will exist in memory simultaneously and may exhaust your system memory. (I say "possibly" because you say nothing about your actual application.)

One way to ameliorate (though not, unfortunately, completely eliminate) this effect would be to make the input array unique "in place":

c:\@Work\Perl>perl -wMstrict -MData::Dump -le "my $data = [ [ 123 ], [ 789 ], [ 'dup' ], [ 456 ], [ 'dup' ], [ 123 ], [ 543 ], ]; dd $data; ;; my %seen; my $lo = 0; for (my $hi = 0; $hi <= $#$data; ) { ++$seen{ $data->[$lo][0] = $data->[$hi][0] }; ++$lo; ++$hi while $hi <= $#$data && $seen{ $data->[$hi][0] }; } $#$data = $lo-1; dd $data; "
[[123], [789], ["dup"], [456], ["dup"], [123], [543]]
[[123], [789], ["dup"], [456], [543]]
This leaves you with just one array to worry about in terms of memory consumption, but the hash still consumes memory, however temporarily.
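For readability, here is the in-place one-liner unrolled into a commented script (same algorithm and variable names; again, output via join instead of Data::Dump):

```perl
use strict;
use warnings;

my $data = [ [123], [789], ['dup'], [456], ['dup'], [123], [543] ];

my %seen;
my $lo = 0;    # index of the next slot to receive a unique value
for (my $hi = 0; $hi <= $#$data; ) {
    # copy the next unseen value down into slot $lo and mark it seen
    ++$seen{ $data->[$lo][0] = $data->[$hi][0] };
    ++$lo;
    # advance $hi past every value that has already been seen
    ++$hi while $hi <= $#$data && $seen{ $data->[$hi][0] };
}
$#$data = $lo - 1;    # truncate the array to just the unique values

print join(' ', map { $_->[0] } @$data), "\n";  # 123 789 dup 456 543
```

Assigning to $#$data shrinks the array in place, so no second copy of the data is ever built; only the %seen hash is extra.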


Give a man a fish:  <%-{-{-{-<