That doesn't work correctly.
You are jumping through a lot of contortions, seemingly to avoid explicitly opening files. That results in a lot of inefficiency; all that sorting could get expensive if the data sets are big. Using hashes for data that are already ordered and whose order you want to retain is a bad design decision.
And even with your comments, it's not particularly easy to read (for me, at least, and I'm not new to Perl).
The following might not win any style points, but it is straightforward and relatively efficient. And working.
my $header;

# Handle the first file separately so we can keep the "categories."
my $file = shift;
$header = "CATS \t$file";    # CATS is category column header.
open my $fh, '<', $file or die "$file: $!";
my @lines = <$fh>;
close $fh;
chomp(@lines);

for my $file (@ARGV) {
    # Add filename to header.
    $header .= "\t$file";
    open my $fh, '<', $file or die "$file: $!";

    # Iterate through file.
    my $i = 0;
    while (my $line = <$fh>) {
        chomp $line;
        # Append tab and 2nd column to appropriate line.
        $lines[$i++] .= "\t" . (split /\t/, $line)[1];
    }
    close $fh;
}

# Print our header and each of our new lines.
print $header, "\n";
print "$_\n" for @lines;
In reply to Re^2: simple table join script
by sauoq
in thread simple table join script
by slavailn