I had this piece of code used somewhere but I can hardly understand it:
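The snippet being asked about is not quoted here, but from the names used in the explanation below (@output, %seen, @uniqueOutput) it is evidently the usual Perl uniqueness idiom, roughly:

    my %seen;
    my @uniqueOutput = grep { ! $seen{ $_ }++ } @output;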
Breaking that down into stages:-
Create an empty hash %seen that will track occurrences of elements in @output
Pass each element of @output one at a time into the grep as $_ for filtering
The first time a particular value occurs in $_ the hash value $seen{ $_ } will be undefined, hence "false", so ! $seen{ $_ }, i.e. "not false", is "true" and that $_ value passes out of the grep into @uniqueOutput
Note also the ++ post-increment operator in ! $seen{ $_ }++, which increments the value of $seen{ $_ } after the test ! $seen{ $_ } has been done. This means that after the first occurrence of a particular value the hash entry for it is no longer empty, i.e. "false", but 1, 2 ... etc. depending on how many times the value has occurred, which evaluates to "true"; "not true" is "false", so second and subsequent occurrences do not pass from the grep into @uniqueOutput (see the short trace below)
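To make the true/false sequence concrete, here is a minimal sketch (the input list is made up for illustration, and the combined ! $seen{ $_ }++ is split into its test and increment steps so the order of operations is visible):

    use strict;
    use warnings;
    use feature 'say';

    my @output = qw{ a b a c b a };
    my %seen;
    my @uniqueOutput = grep {
        my $keep = ! $seen{ $_ };          # true only on the first sighting
        say "$_: test ", ( $keep ? 'true' : 'false' ),
            ", previous count ", ( $seen{ $_ } // 0 );
        $seen{ $_ }++;                     # count the sighting after the test
        $keep;                             # last expression is the grep result
    } @output;
    say "@uniqueOutput";                   # prints: a b c

Running this shows the test flipping from "true" to "false" as soon as the count for a value moves past zero, which is exactly the behaviour described above.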
My preference is to limit the scope of %seen to a do block so that it isn't left lying around.
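To see why that matters, here is a small hypothetical sketch (the lists and variable names are made up): if %seen is left in the enclosing scope, a later deduplication of a different list reuses the stale counts and silently drops values.

    use strict;
    use warnings;
    use feature 'say';

    my %seen;                                            # shared, "left lying around"

    my @first  = qw{ a b c };
    my @second = qw{ b c d };

    my @uniqFirst  = grep { ! $seen{ $_ }++ } @first;
    my @uniqSecond = grep { ! $seen{ $_ }++ } @second;   # still remembers a, b and c

    say "@uniqFirst";                                    # a b c
    say "@uniqSecond";                                   # d -- b and c wrongly dropped

Scoping a fresh my %seen inside a do block alongside each grep, as in the demonstration that follows, avoids this.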
    johngg@abouriou:~/perl/Monks$ perl -Mstrict -Mwarnings -E '
        say q{};
        my @arr = qw{ a b c d c e f a b b g d };
        my @uniq = do {
            my %seen;
            grep { not $seen{ $_ }++ } @arr;
        };
        say qq{@uniq};'

    a b c d e f g

I hope this makes the process clearer for you but ask further if something is still not clear.
Cheers,
JohnGG