http://qs1969.pair.com?node_id=11140231


in reply to Get unique fields from file

G'day sroux,

Given the size of your data, processing speed may be a factor. The following may be faster than the other proposed solutions, but do Benchmark with realistic data.
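For instance, a harness along these lines with the core Benchmark module would give concrete numbers (a minimal sketch only: the two subs are throwaway placeholders standing in for the competing solutions, each of which would read the real file end to end):

#!/usr/bin/env perl
use strict;
use warnings;
use Benchmark 'cmpthese';

# Trivial placeholder subs: substitute the real candidate solutions here.
sub via_hash_merge { my %h = map +($_ => 1), 1 .. 1000; \%h }
sub via_seen       { my (%seen, @u); $seen{$_}++ or push @u, $_ for 1 .. 1000; \@u }

# A negative count runs each sub for at least that many CPU seconds;
# cmpthese() prints a table comparing the rates.
cmpthese(-5, {
    hash_merge => \&via_hash_merge,
    seen       => \&via_seen,
});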

I've used the same input data as others have done.

$ cat pm_11140211_flatfile.dat
head1|head2|head3
val1|val2|val3
val1|val4|val5
val6|val4|val5
val2|val7|val5
val3|val7|val3

My code takes advantage of the fact that when duplicate keys are used in a hash assignment, only the last duplicate takes effect. A short piece of code to demonstrate:

$ perl -e '
    use Data::Dump;
    my %x = (a => 1, b => 3, c => 4);
    my %y = (b => 2, c => 3, d => 4);
    my %z = (%x, %y);
    dd \%z;
'
{ a => 1, b => 2, c => 3, d => 4 }

So there's no need for %seen, uniq(), or any similar mechanism to handle duplicates.
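For contrast, this is a minimal, self-contained sketch of that conventional %seen idiom (the input values here are made up):

#!/usr/bin/env perl
use strict;
use warnings;

# The usual explicit-dedup pattern: keep a value only the first time it's seen.
my @values = qw{val1 val1 val6 val2 val3};
my (%seen, @uniq);
for my $val (@values) {
    push @uniq, $val unless $seen{$val}++;
}
print "@uniq\n";    # val1 val6 val2 val3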

Also note that I've used bind_columns(). See the benchmark in "Text::CSV - getline_hr()".
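In isolation, bind_columns() ties each parsed field to a piece of storage (here, slots of a hash), so every subsequent getline() refills the same variables rather than returning a fresh arrayref. A minimal sketch of just that behaviour, reading from DATA instead of a file:

#!/usr/bin/env perl
use strict;
use warnings;
use Text::CSV;

my $csv  = Text::CSV::->new({sep_char => '|'});
my @cols = @{$csv->getline(\*DATA)};    # header row

my $row = {};
$csv->bind_columns(\@{$row}{@cols});    # bind each field to $row->{$col}

while ($csv->getline(\*DATA)) {
    # $row has been repopulated in place by getline()
    print join(' ', map("$_=$row->{$_}", @cols)), "\n";
}

__DATA__
head1|head2|head3
val1|val2|val3
val1|val4|val5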

The code:

#!/usr/bin/env perl

use strict;
use warnings;
use autodie;

use Text::CSV;

my $infile = 'pm_11140211_flatfile.dat';

my $csv = Text::CSV::->new({sep_char => '|'});

open my $in_fh, '<', $infile;

my $row = {};
my @cols = @{$csv->getline($in_fh)};
$csv->bind_columns(\@{$row}{@cols});

my %data = map +($_, {}), @cols;

while ($csv->getline($in_fh)) {
    $data{$_} = { %{$data{$_}}, $row->{$_}, 1 } for @cols;
}

print "$_: ", join(', ', sort keys %{$data{$_}}), "\n" for sort @cols;

The output:

head1: val1, val2, val3, val6
head2: val2, val4, val7
head3: val3, val5

— Ken