http://qs1969.pair.com?node_id=11140211

sroux has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,

Like many, I use Perl as a Swiss Army knife: I can understand code and write some, but I also copy and glue pieces together to make scripts used on a desktop (as utilities) or on a server (file parsing, mapping, etc.).

I would like to write a utility that reads a file (up to 500 MB) and outputs the unique values >> for each delimited field << (what I mean is not doing a unique on the whole file content, but field by field).
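For example (made-up data), given a pipe-delimited file like:

name|city
alice|paris
bob|paris
alice|lyon

I would want the output to show, for the "name" field, alice and bob, and for the "city" field, paris and lyon.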

It looks like hashes are a nice solution for this task, but I have never used hashes.

I found some scripts around:

# Create a hash of arrays, keyed on the header names
my %combined;
open(my $input, "<", "flatfile.dat") or die "Unable to open file: $!";
my $line = <$input>;
chomp $line;
my @headers = split /\|/, $line;   # '|' is a regex metacharacter, so it must be escaped
while (<$input>) {
    chomp;
    my @row = split /\|/;          # splits $_ (the current line) on literal '|'
    for my $header (@headers) {
        push @{ $combined{$header} }, shift @row;
    }
}
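If I read that right, %combined should end up mapping each header to an array of every value in that column. I checked my understanding with Data::Dumper (a core module); the output shown is just what I would expect for my made-up sample above:

use Data::Dumper;
# After the while loop, dump the structure to inspect it
print Dumper(\%combined);
# Prints something like:
# $VAR1 = {
#   'name' => [ 'alice', 'bob', 'alice' ],
#   'city' => [ 'paris', 'paris', 'lyon' ],
# };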

Now, how do I get the unique values for each field and produce an output file? I have this piece of code that I used somewhere else, but I can hardly understand it:

# Remove duplicates: grep keeps an element only the first time it
# appears, because $seen{$_}++ is 0 (false) on the first occurrence
# and nonzero (true) afterwards
my %seen = ();
my @uniqueOutput = grep { ! $seen{ $_ }++ } @output;
print $fh3 @uniqueOutput;
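Putting the two pieces together, is something like this the right direction? This is an untested sketch; the output filename unique_values.txt and the one-value-per-line format are just my assumptions:

# Untested sketch: for each header, filter its column values with a
# fresh %seen hash, then write the header and its unique values
open(my $out, ">", "unique_values.txt") or die "Unable to open output: $!";
for my $header (@headers) {
    my %seen = ();
    my @unique = grep { ! $seen{$_}++ } @{ $combined{$header} };
    print $out "$header\n";
    print $out "$_\n" for @unique;
    print $out "\n";
}
close $out;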

Thank you for any guidance you may provide.