in reply to Re: finding duplicate data
in thread finding duplicate data

Useless use of cat, and a misunderstanding of uniq (it only collapses duplicate *adjacent* lines). Instead, use
sort -u filename
to get the unique lines (though the perl solution does less work, and does not scramble the line order).
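To see why sorting first matters, here is a quick sketch (the file name and contents are made up for illustration):

```shell
# A file where the duplicate lines are NOT adjacent:
printf 'apple\nbanana\napple\n' > /tmp/fruits.txt

# uniq alone misses the duplicate, because the two "apple"
# lines are not next to each other:
uniq /tmp/fruits.txt
# apple
# banana
# apple

# Sorting makes duplicates adjacent, so uniq can collapse them;
# sort -u does both steps in one command:
sort -u /tmp/fruits.txt
# apple
# banana
```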

To list each duplicated line only once,

perl -ne 'print if $h{$_}++ == 1' filename

The PerlMonk tr/// Advocate