in reply to array of arrays - printing data

The algorithm above requires rereading the 2 gig files N+1 times, where N is the number of records in the file! This is going to take a very long time.

If your files are two gigs, you would be better off reading your data only once. During that single pass you would populate a hash. The key of the hash would be something like "gene id|number". Each value would be an array storing all of the sequences associated with that number.
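To make that concrete before the fuller example below, here is a bare-bones sketch of the single-pass, hash-of-arrays idea (the file name and the split_record sub are only placeholders for your real file and parsing, not code from your script):

my %sequences;   # "gene id|number" => reference to an array of sequences

# placeholder parse: treat a record as whitespace-separated fields
sub split_record { return split /\s+/, shift }

open my $fh, '<', 'seq.txt' or die "cannot open seq.txt: $!";   # example file name
while (my $record = <$fh>) {
    my ($gid, $num, @seqs) = split_record($record);
    push @{ $sequences{"$gid|$num"} }, @seqs;   # one pass, append as you go
}
close $fh;

The whole file is read exactly once; everything after that is just walking the hash.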

I think you will also find printing out the header and genes much, much easier if you store the data in a hash. If you are concerned about the size of the hash (and at 2Gigs you should be), you might want to use a hash tied to a random access file - search CPAN for Tie::Hash for alternatives.
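If it helps, here is a rough sketch of what a disk-backed tied hash might look like. I'm using DB_File purely as an example (the file name is made up, and any of the Tie::Hash-style modules would do); since a DBM hash can only store plain strings, it concatenates the sequences into one string per key instead of pushing onto an array reference:

use DB_File;
use Fcntl;   # for O_CREAT and O_RDWR

# tie the hash to a Berkeley DB file so it lives on disk, not in memory
my %hSequences;
tie %hSequences, 'DB_File', 'sequences.db', O_CREAT | O_RDWR, 0666, $DB_HASH
    or die "cannot tie sequences.db: $!";

# inside your read loop, append to a whitespace-joined string per key
# ($sKey and $sSequences are stand-ins for whatever you parsed out)
my ($sKey, $sSequences) = ('2 15', 'ACGT TTGA');
$hSequences{$sKey} = defined $hSequences{$sKey}
    ? "$hSequences{$sKey} $sSequences"
    : $sSequences;

untie %hSequences;

You trade speed for memory, so only bother with the tie if the plain in-memory hash really won't fit.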

I also notice that your records are separated by >. You can greatly simplify the parsing process by setting the record separator variable $/ (see perlvar for details). If you set $/='>', then you can read in an entire record at once, and you won't need to figure out whether the current line is the header or the list of sequences.

The following code sample illustrates one way you might use $/ and a hash:

#use local so that the setting doesn't interfere with
#other places you might want to read in data.
local $/='>';

#load sequences
my %hSequences;
while (my $line = <DATA>) {
    chomp $line;
    next if $line eq '';

    #extract data from record
    # -- s at end of regex needed so that . matches new lines
    my ($subs, $gid, $sSequences) =
        $line =~ /^(\d+)_\d+\s+geneid(\d+)\s+\d+\s+\d+\slen=\d+\s+(.*)$/s;

    #populate hash
    my $sKey = "$subs $gid";
    my $aSequences = $hSequences{$sKey};
    $hSequences{$sKey} = $aSequences = [] unless defined($aSequences);
    push @$aSequences, split(/\s+/, $sSequences);
}

#print results
while (my ($k, $v) = each(%hSequences)) {
    $k =~ s/ / gid=/;
    print ">$k\n" . join("\n", sort @$v) . "\n";
}

Best, beth

Re^2: array of arrays - printing data
by sugar (Beadle) on Mar 19, 2009 at 11:13 UTC
    Thanks.. trying with a hash, but I am no good at hashes, still a beginner with them. That's the reason why I chose arrays :( In fact I was trying to split the 2 gig file into smaller files and run the process on multiple servers to get the work done faster. Anyway, I will still try my luck with hashes :(
Re^2: array of arrays - printing data
by sugar (Beadle) on Mar 19, 2009 at 17:27 UTC
    Thanks for your support and encouragement. I tried your code after working through it, and it seems clear to me, but I am not able to split the header and populate the hash. That's where I am stuck right now. I tried using the split method, but in vain.
Re^2: array of arrays - printing data
by sugar (Beadle) on Mar 19, 2009 at 18:18 UTC
    Finally managed to get the results, but the problem is that, though hashes are fast, ordering the sequences is a problem with a hash. As I had mentioned in my query, in one of the example sets I gave (>2 geneid2), the >2_2 must come in the second position only. Even by using Tie::Hash I cannot do this, because the tied hash will follow the order of the input file. What do I do in this case? Please suggest.
      ++ ++ ++ for pushing yourself to try new things! I also really like that you worked to understand the code and make it your own.

      A while loop over each prints keys in whatever order they happen to be stored in the hash, which is effectively arbitrary. To get the keys in the right order, you need a slightly different technique: (a) extract the keys, (b) sort them, (c) use a foreach loop to visit each key, and (d) for each key, query the hash to get the value. The code to print out the hash would look something like this:

      # keys %foo:  extracts the keys from hash %foo
      #             and returns them as an array
      #
      # sort @foo:  sorts the elements of array @foo
      #
      # foreach:    loops through the elements of an array
      my @aKeys = sort(keys(%hSequences));
      foreach my $k (@aKeys) {
          my $v = $hSequences{$k};   #get value
          $k =~ s/ / gid=/;
          print ">$k\n" . join("\n", sort @$v) . "\n";
      }

      For more information, see keys and sort.

      I'm assuming the number of keys is a lot less than 2G and that you have enough memory to hold and sort them. If not, there are other things that can be done to sort a super large key list, but first see if you can do without getting fancy.

      Good luck and great work!

      Best, beth

        Hi elisheva, thank you so much. My work is done. It took only 10 seconds to process a 300 MB file. I made some changes to adapt the program to my input file. I had used a different technique to sort the sequences before seeing your last post. Anyway, I still used the hash-sorting part for arranging the headers in the right order :)
        #use local so that the setting doesn't interfere with
        #other places you might want to read in data.
        local $/='>';

        open(DATA, "seq.txt") or die "cannot open";

        #load sequences
        my %hSequences;
        while (my $line = <DATA>) {
            chomp $line;
            next if $line eq '';

            #extract data from record
            # -- s at end of regex needed so that . matches new lines
            my ($subs, $xon, $gid, $sSequences) =
                $line =~ /^(\d+)_(\d+)\s+gid(\d+)\s+\d+\s+\d+\s+len=\d+\s+(.*)$/s;

            #populate hash
            my $sKey = "$gid $subs";
            my $aSequences = $hSequences{$sKey};
            $hSequences{$sKey} = $aSequences = [] unless defined($aSequences);
            push @$aSequences, split(/\s+/, $xon . $sSequences);
        }

        my @aKeys = sort(keys(%hSequences));
        foreach my $k (@aKeys) {
            my $v = $hSequences{$k};   #get value
            $k =~ s/^/gid/;
            my $single = join("", sort @$v);
            $single =~ s/[0-9]//g;
            print ">$k\n" . $single . "\n";
        }