The algorithm above requires rereading the 2GB file N+1 times, where N is the number of records in the file! This is going to take a very long time.
If your files are two gigs, you would be better off using a single read of your data. During that one read you would populate a hash. The key of the hash would be something like "gene id|number". The value would be an array storing all of the sequences associated with that number.
I think you will also find printing out the header and genes much, much easier if you store the data in a hash. If you are concerned about the size of the hash (and at 2GB you should be), you might want to use a hash tied to a random access file - search CPAN for Tie::Hash alternatives.
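As a minimal sketch of the tied-hash idea, here is one way it might look using SDBM_File, a DBM module that ships with core Perl (the filename 'sequences' is just for this demo, and the key/values are made up). Note that SDBM has a small per-record size limit, so for real 2GB data a module like DB_File or another CPAN Tie::Hash implementation would be a better fit:

```perl
use strict;
use warnings;
use Fcntl;        # for O_RDWR, O_CREAT
use SDBM_File;    # core module; CPAN tied-hash modules work similarly

# SDBM creates sequences.pag/sequences.dir on disk, so the hash's
# contents live in files rather than in memory.
tie my %hSequences, 'SDBM_File', 'sequences', O_RDWR|O_CREAT, 0644
    or die "Cannot tie hash: $!";

# Tied DBM hashes store flat strings, not references, so join the
# sequence list into one string and split it back out on retrieval.
$hSequences{'123 456'} = join ' ', qw(ACGT TTGA);
my @aSeqs = split ' ', $hSequences{'123 456'};

untie %hSequences;
unlink 'sequences.pag', 'sequences.dir';  # clean up the demo files
```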
I also notice that your records are divided by >. You can greatly simplify the parsing process by setting the record separator variable $/ (see perlvar for details). If you set $/='>', then you can read in an entire record at once. You won't need to figure out whether the current line is the header or the list of sequences.
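To see the $/ trick in isolation, here is a small sketch reading from an in-memory string rather than a real file (the record contents are made up). Note that the very first read returns only the leading '>', which chomps down to an empty string and should be skipped:

```perl
use strict;
use warnings;

my $sData = ">rec1 ACGT\n>rec2 TTGA\n";
open my $fh, '<', \$sData or die "Cannot open in-memory file: $!";

local $/ = '>';              # each read now ends at the next '>'
my @aRecords;
while (my $sRecord = <$fh>) {
    chomp $sRecord;          # chomp strips the trailing '>', not "\n"
    next if $sRecord eq '';  # skip the empty lead-in before the first '>'
    push @aRecords, $sRecord;
}
close $fh;
# @aRecords now holds one complete record per element
```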
The following code sample illustrates one way you might use $/ and a hash:
#use local so that the setting doesn't interfere with
#other places you might want to read in data.
local $/ = '>';

#load sequences
my %hSequences;
while (my $line = <DATA>) {
    chomp $line;
    next if $line eq '';

    #extract data from record
    # -- s at end of regex needed so that . matches new lines
    my ($subs, $gid, $sSequences) = $line =~
        /^(\d+)_\d+\s+geneid(\d+)\s+\d+\s+\d+\slen=\d+\s+(.*)$/s;

    #populate hash
    my $sKey = "$subs $gid";
    my $aSequences = $hSequences{$sKey};
    $hSequences{$sKey} = $aSequences = [] unless defined($aSequences);
    push @$aSequences, split(/\s+/, $sSequences);
}

#print results
while (my ($k, $v) = each(%hSequences)) {
    $k =~ s/ / gid=/;
    print ">$k\n" . join("\n", sort @$v) . "\n";
}
Best, beth
In reply to Re: array of arrays - printing data
by ELISHEVA
in thread array of arrays - printing data
by sugar