Because the value you assign to each key is unimportant, you might as well use undef to signify that:
use warnings;
use strict;
use 5.010;

my @arr = qw{ a a b b c c c };
my %hash;
@hash{@arr} = undef;

use Data::Dumper;
say Dumper(\%hash);

--output:--
$VAR1 = {
          'c' => undef,
          'a' => undef,
          'b' => undef
        };
-----------

if (exists $hash{a}) {
    say 'yes';
}
else {
    say 'no';
}

--output:--
yes
----------

for (keys %hash) {
    say;
}

--output:--
c
a
b
If you are reading lines from a file, you can reduce your peak memory use by assigning each line to the hash as you read it--rather than storing all the lines in an array first and then doing a gang assignment like the one above.
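A minimal sketch of that line-by-line approach (here an in-memory filehandle stands in for a real file, so the snippet is self-contained; with a real file you would open by name instead):

```perl
use warnings;
use strict;
use 5.010;

# In-memory filehandle standing in for a real file of lines
my $lines = "apple\nbanana\napple\ncherry\nbanana\n";
open my $fh, '<', \$lines or die "open: $!";

my %seen;
while (my $line = <$fh>) {
    chomp $line;
    $seen{$line} = undef;   # only the key matters; the value stays undef
}
close $fh;

say for sort keys %seen;
```

Only one line is held in memory at a time; the hash keeps just the unique strings.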
In reply to Re^3: removing duplicates from a string list by 7stud
in thread removing duplicates from a string list by Anonymous Monk