http://qs1969.pair.com?node_id=11140230


in reply to Get unique fields from file

At least this does not fail any of the test cases you have provided :)

#!/usr/bin/perl
use strict; # https://perlmonks.org/?node_id=11140211
use warnings;

my $data = <<"END";
head1|head2|head3
val1|val2|val3
val1|val4|val5
val6|val4|val5
val2|val7|val5
val3|val7|val3
END

open my $fh, "<", \$data or die "can't open input file $!";

my @headers = split /[|\n]/, <$fh>;
my %seen;
while( <$fh> )
  {
  my @row = split /[|\n]/;
  $seen{$_}{shift @row}++ for @headers;
  }

print join "\n", "UNIQUE VALUES for $_:", (sort keys %{ $seen{$_} }), "\n"
  for @headers;

Outputs:

UNIQUE VALUES for head1:
val1
val2
val3
val6

UNIQUE VALUES for head2:
val2
val4
val7

UNIQUE VALUES for head3:
val3
val5

Re^2: Get unique fields from file
by Marshall (Canon) on Jan 07, 2022 at 03:49 UTC
    I do like this general approach; however, the OP is talking about a sizeable file of 500 MB. Depending upon the data, of course, your HoH (hash of hashes) structure could consume quite a bit more memory than the 500 MB file itself.

    {
      head1 => { val1 => 2, val2 => 1, val3 => 1, val6 => 1 },
      head2 => { val2 => 1, val4 => 2, val7 => 2 },
      head3 => { val3 => 2, val5 => 3 },
    }
    I came up with a representation (at this post) where each column value occurs only once, as a hash key, and the value of each key is an array describing, per column, whether the value does not appear at all, appears exactly once, or appears more than once.

    We both interpreted "unique" to mean different things.
    I see you think that means: "don't repeat yourself after having said something once".
    I thought it meant: "don't say anything at all if you would repeat yourself".

    My data structure:

    {
      val1 => [-1],    # val1 occurs more than once in col 1
      val2 => [2, 1],  # val2 occurs once in col 1 and once in col 2
      val3 => [-3, 1], # val3 occurs more than once in col 3
                       # but only one time in col 1
      val4 => [-2],
      val5 => [-3],
      val6 => [1],
      val7 => [-2],    # val7 is mentioned at least twice in col 2
    }
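    A minimal sketch of how such a structure might be built, based only on the description above (it is not the code from the linked post, and it assumes a pipe-delimited input file named data.txt laid out like the sample data):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %count;    # $count{value}{column number} = occurrences in that column
    open my $fh, '<', 'data.txt' or die "can't open input file: $!";
    <$fh>;        # skip the header line
    while ( <$fh> ) {
        chomp;
        my @row = split /\|/;
        $count{ $row[$_] }{ $_ + 1 }++ for 0 .. $#row;
    }

    # value => [ col (appears exactly once) or -col (appears more than once) ]
    my %bycolumn;
    for my $val ( keys %count ) {
        for my $col ( sort { $a <=> $b } keys %{ $count{$val} } ) {
            push @{ $bycolumn{$val} },
                $count{$val}{$col} == 1 ? $col : -$col;
        }
    }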
    Of course, I could generate the same output as yours from my data structure, because I know in which columns a term appeared more than once.
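    Continuing that sketch: a positive or a negative entry both mean "this value appears in that column", so per-column value lists like the ones in the reply above can be recovered from %bycolumn, for example:

    my %distinct;
    for my $val ( keys %bycolumn ) {
        push @{ $distinct{ abs $_ } }, $val for @{ $bycolumn{$val} };
    }
    for my $col ( sort { $a <=> $b } keys %distinct ) {
        print join( "\n", "UNIQUE VALUES for column $col:",
            sort @{ $distinct{$col} } ), "\n\n";
    }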

      Yep. I couldn't tell if what was wanted was "unique in row", "unique in column" or "only singletons in column", so I just thought I'd toss something out there.

      I do have a solution that takes practically no memory (uses external sort -u), but I'll wait to see responses to the proposed answers first.
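      Just to sketch the general idea (this is not the solution being held back, only an illustration of it, and it assumes a Unix-like system with sort on the PATH plus the same pipe-delimited data.txt): stream "header<TAB>value" pairs into an external sort -u, so the sort utility, which can spill to disk, does the deduplication instead of an in-memory hash.

      #!/usr/bin/perl
      use strict;
      use warnings;

      open my $in,  '<',  'data.txt' or die "can't open input file: $!";
      open my $out, '|-', 'sort -u'  or die "can't start sort: $!";

      chomp( my @headers = split /\|/, scalar <$in> );
      while ( <$in> ) {
          chomp;
          my @row = split /\|/;
          # one "header TAB value" line per field; sort -u keeps each pair once
          print {$out} "$headers[$_]\t$row[$_]\n" for 0 .. $#row;
      }
      close $out;    # the unique header/value pairs appear on stdout, sorted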

        I see that we both had the same interpretation issue...

        As far as sort -u goes, I didn't know whether the OP is on Windows, Unix or some other platform. There is supposed to be an undocumented sort switch on Win10, sort /UNIQ, but I didn't bother to test that. This can be done in another way in the new PowerShell, but I didn't worry about that either. I also circled back to "what the heck does unique mean?".