in reply to Re: removing duplicates from a string list
in thread removing duplicates from a string list

do you mean something like this:

my %ids;
# loop through lines
#   add the id to the hash; the value does not matter
$ids{$my_input_id} = 'anything';
# end loop
my @ids = keys %ids;
my $out = join(',', @ids);

or this:

my %ids;
# loop through lines
#   add the id to the hash, storing the id itself as the value
$ids{$my_input_id} = $my_input_id;
# end loop
my @ids = values %ids;
my $out = join(',', @ids);

or does it make no difference?
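For what it's worth, here is a runnable sketch of both variants (the input list is invented for illustration):

```perl
use strict;
use warnings;

# sample input with duplicates (invented for illustration)
my @input = qw(id1 id2 id1 id3 id2);

# variant 1: value is a throwaway, collect the unique ids with keys
my %ids;
$ids{$_} = 'anything' for @input;
my $from_keys = join(',', sort keys %ids);

# variant 2: value is the id itself, collect with values
my %ids2;
$ids2{$_} = $_ for @input;
my $from_values = join(',', sort values %ids2);

print "$from_keys\n";    # id1,id2,id3
print "$from_values\n";  # id1,id2,id3 - same, but only because each value equals its key
```

The two give the same result only because variant 2 stores each id as its own value; if the value were anything else, `values` would no longer return the unique ids.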

Re^3: removing duplicates from a string list
by PeterPeiGuo (Hermit) on Nov 27, 2010 at 21:42 UTC

    Always use strict and warnings, so you would have to declare your variable with my.

    With a hash, the keys are unique, but not necessarily the values.
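    A quick sketch of that point (the keys and values here are invented for illustration): two distinct keys can share one value, so values can repeat while keys cannot.

    use strict;
    use warnings;

    my %seen;
    $seen{'id1'} = 'dup';   # two different keys...
    $seen{'id2'} = 'dup';   # ...sharing the same value

    my @keys      = keys %seen;                    # 2 unique keys
    my %uniq_vals = map { $_ => 1 } values %seen;  # only 1 distinct value

    printf "%d keys, %d distinct values\n", scalar @keys, scalar keys %uniq_vals;
    # prints "2 keys, 1 distinct values"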

    Peter (Guo) Pei

Re^3: removing duplicates from a string list
by 7stud (Deacon) on Nov 27, 2010 at 22:25 UTC

    Because the value you assign to each key is unimportant, you might as well use undef to signify that:

    use warnings;
    use strict;
    use 5.010;

    my @arr = qw{ a a b b c c c };
    my %hash;
    @hash{@arr} = undef;

    use Data::Dumper;
    say Dumper(\%hash);

    --output:--
    $VAR1 = {
              'c' => undef,
              'a' => undef,
              'b' => undef
            };
    -----------

    if (exists $hash{a}) {
        say 'yes';
    }
    else {
        say 'no';
    }

    --output:--
    yes
    ----------

    for (keys %hash) {
        say;
    }

    --output:--
    c
    a
    b

    If you are reading lines from a file, you can reduce the amount of memory you use at any one time by assigning the lines to the hash one at a time--rather than storing the lines in an array and then doing a gang assignment as above.
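    A minimal sketch of that line-at-a-time approach (the in-memory filehandle here just stands in for a real file on disk):

    use strict;
    use warnings;

    # stand-in for a real file; in practice: open my $fh, '<', 'ids.txt' or die $!;
    my $data = "id1\nid2\nid1\nid3\n";
    open my $fh, '<', \$data or die $!;

    my %seen;
    while (my $line = <$fh>) {
        chomp $line;
        $seen{$line} = undef;    # only the key matters
    }
    close $fh;

    print join(',', sort keys %seen), "\n";  # id1,id2,id3

    Only the hash of unique ids is ever held in memory, not the full list of lines.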