my $file1="file1.txt"; open FILE1, "<$file1" or die $!; my $file2="min_viols_endpointSorted.csv"; open FILE2, ">$file2" or die $!; while(<FILE1>){ my $path = $_; $path =~ /([^\s]+)/; $path = $1; #Extracting path chop($path); my $slack = $_; $slack =~ /[^\f+][\s+][\f+][\s+][\f+][\s+]([\f+]+)[\s](VIOLATED)/; $slack = $1; print "$slack\n"; chop($slack); print FILE2 "$path $slack\n"; }
After this I plan to read the CSV into a hash, keyed by path, and compare the values for duplicate keys: if a later value for a key is smaller than the one already stored, keep the smaller one. The output should then contain each path only once, as I have shown in my question. Please help with this.
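Something like the following is what I have in mind for that step. It assumes the "path slack" format written above, that "smaller" means a plain numeric comparison on the slack values, and a made-up name for the final output file (min_viols_unique.csv):

use strict;
use warnings;

my %min_slack;
open my $in, '<', 'min_viols_endpointSorted.csv' or die "Cannot open input: $!";
while (<$in>) {
    my ($path, $slack) = split ' ';   # "path slack" per line
    next unless defined $slack;
    # Keep the numerically smallest slack seen for each path
    $min_slack{$path} = $slack
        if !exists $min_slack{$path} or $slack < $min_slack{$path};
}
close $in;

# min_viols_unique.csv is a placeholder name for the final output
open my $out, '>', 'min_viols_unique.csv' or die "Cannot open output: $!";
print $out "$_ $min_slack{$_}\n" for sort keys %min_slack;
close $out;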