I see that output 2 contains the duplicate record where the column 6 value 213 is less than 345, except that the 345 appears on a later line, not a previous one. To compare a value against both earlier and later lines you either have to scan the file twice, or read all the lines into a structure and then create the output files.

This example scans the file twice:
  # hash to hold highest values
  my %col6 = ();
  while (my $line = <$data>) {
    chomp $line;
    my @fields = split ",", $line, -1;
    my $key = $fields[1].$fields[2];
    # store max values
    if ( $fields[5] > $col6{$key} ){
      $col6{$key} = $fields[5];
    }
  }
  # reset to start
  seek $data, 0, 0;
  # read file 2nd time
  while (my $line = <$data>) {
    chomp $line;
    my @fields = split ",", $line, -1;
    my $key = $fields[1].$fields[2];
    # reject lowest duplicate
    if ( $fields[5] < $col6{$key} ){
      # extra text added for debugging
      print OUTFILE_1 $line." - duplicate $key $col6{$key}\n";
    } else {
      print OUTFILE $line."\n";
    }
  }
Update: This simple example assumes column 6 values are never negative. On the first occurrence of a key, $col6{$key} is undef and compares as 0, so a negative value would never be stored as the maximum.
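For completeness, here is a minimal sketch of the other approach mentioned above: read all the lines into a structure once, then write both output files. The input and output filenames are assumptions, and the defined() guard removes the non-negative restriction of the two-pass version:

  #!/usr/bin/perl
  use strict;
  use warnings;

  my (%col6, @lines);
  open my $data, '<', 'input.csv' or die $!;
  while (my $line = <$data>) {
    chomp $line;
    my @fields = split ",", $line, -1;
    my $key = $fields[1].$fields[2];
    # keep every record for the second pass over memory
    push @lines, [ $key, $fields[5], $line ];
    # track the highest column 6 value per key
    $col6{$key} = $fields[5]
      if !defined $col6{$key} or $fields[5] > $col6{$key};
  }
  close $data;

  open my $dup,  '>', 'output2.txt' or die $!;
  open my $keep, '>', 'output1.txt' or die $!;
  for my $rec (@lines) {
    my ($key, $val, $line) = @$rec;
    if ( $val < $col6{$key} ){
      print $dup  $line."\n";   # lower duplicate
    } else {
      print $keep $line."\n";   # highest value for this key
    }
  }
  close $dup;
  close $keep;

This trades memory for a single read of the file, which matters only if the input is too large to hold in RAM.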
by poj
In reply to Re: move the line if particular column is duplicate or more than 1 entries
in thread move the line if particular column is duplicate or more than 1 entries by hyans.milis