sugar has asked for the wisdom of the Perl Monks concerning the following question:

Dear monks, I am having some problems printing data from an array of arrays. I am almost there, but not able to print the header. Here is the problem:
inputfile:

>1_1 geneid1 34 45 len=10
AGTCGA
GCAA
>1_2 geneid1 54 75 len=21
AGTCGAAGTCGA
ACAAACAAT
>2_1 geneid1 78 83 len=5
CGTCG
>1_1 geneid2 14 25 len=11
AGTCGAA
GCAA
>2_1 geneid2 4 12 len=8
AGTCGAAT
>2_3 geneid2 19 27 len=8
AGTC
GCAA
>2_2 geneid2 89 95 len=6
AAAAAA

---------------------------
facts:
1) This is just a sample, but the real file will be 2 GB in size. The AGTC bases will be in the 1000s to millions.

problem:
1) I have to join the sequences belonging to the same number and also the same geneid.
2) The output file will be a list of sequences. Each sequence will be a set of numbers joined together, but with one header (which is my problem now: I am not able to print it).

---------------------------------
sample output file:

>1 gid1
AGTCGA
GCAA
AGTCGAAGTCGA
ACAAACAAT
>2 gid1
CGTCG
>1 gid2
AGTCGAA
GCAA
>2 gid2
AGTCGAAT
AAAAAA
AGTC
GCAA
I have removed the extra header information; now it has only the numeric id and the geneid. One point to notice: if you look at the input file, the 2_2 should join in the second place of the geneid2 set, so I have joined that 2_2 sequence after the 2_1 sequence, followed by 2_3. The script written so far for that:
use strict;
use warnings;

my @AoA = ();

MAIN: while (<DATA>) {
    if (/^>(\d+)_(\d+)\s+geneid(\d+)/o) {
        my ($tops, $mids, $subs) = ($3, $1, $2);
        $tops -= 1;
        $mids -= 1;
        $subs -= 1;
        SUB: while (<DATA>) {
            redo MAIN unless (/^[ACGT]/o);
            chomp;
            push @{$AoA[$tops][$mids][$subs]}, $_;
        }
    }
}

for my $i (@AoA) {
    for my $j (@{$i}) {
        for my $n (@{$j}) {
            for my $r (@{$n}) {
                print $r, "\n";
            }
        }
    }
}

__DATA__
>1_1 geneid1 34 45 len=10
AGTCGA
GCAA
>1_2 geneid1 54 75 len=21
AGTCGAAGTCGA
ACAAACAAT
>2_1 geneid1 78 83 len=5
CGTCG
>1_1 geneid2 14 25 len=11
AGTCGAA
GCAA
>2_1 geneid2 4 12 len=8
AGTCGAAT
>2_3 geneid2 19 27 len=8
AGTC
GCAA
>2_2 geneid2 89 95 len=6
AAAAAA
Please guide.

Replies are listed 'Best First'.
Re: array of arrays - printing data
by ELISHEVA (Prior) on Mar 19, 2009 at 09:51 UTC

    The algorithm above requires one to reread the 2 GB file N+1 times, where N is the number of records in the file! This is going to take a very long time.

    If your files are two gigs you would be better off using a single read of your data. During the first read you would populate a hash. The key of the hash would be something like "gene id|number". The values would be an array that stored all of the sequences associated with that number.

    I think you will also find printing out the header and genes much, much easier if you store the data in a hash. If you are concerned about the size of the hash (and at 2Gigs you should be), you might want to use a hash tied to a random access file - search CPAN for Tie::Hash for alternatives.
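
    For example, here is a minimal sketch of the tied-hash idea (my own illustration, assuming the DB_File module and Berkeley DB are available; other Tie::Hash-style modules on CPAN work similarly). Note that DBM values must be plain strings, so each value here is the sequences joined with newlines rather than an array reference:

    use strict;
    use warnings;
    use Fcntl;      # supplies O_RDWR, O_CREAT
    use DB_File;    # ties a hash to an on-disk Berkeley DB file

    # the hash now lives in sequences.db instead of RAM
    tie my %hSequences, 'DB_File', 'sequences.db', O_RDWR|O_CREAT, 0666, $DB_HASH
        or die "Cannot tie sequences.db: $!";

    # append a sequence to the record for a "number geneid" key
    my $sKey = '1 1';
    $hSequences{$sKey} = defined $hSequences{$sKey}
        ? "$hSequences{$sKey}\nAGTCGA"
        : 'AGTCGA';

    untie %hSequences;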

    I also notice that your records are divided by '>'. You can greatly simplify the parsing process by setting the record separator variable $/ (see perlvar for details). If you set $/='>', then you can read in an entire record at once. You won't need to figure out whether the current line is the header or the list of sequences.

    The following code sample illustrates one way you might use $/ and a hash:

    #use local so that the setting doesn't interfere with
    #other places you might want to read in data.
    local $/='>';

    #load sequences
    my %hSequences;
    while (my $line = <DATA>) {
        chomp $line;
        next if $line eq '';

        #extract data from record
        # -- s at end of regex needed so that . matches new lines
        my ($subs, $gid, $sSequences) =
            $line =~ /^(\d+)_\d+\s+geneid(\d+)\s+\d+\s+\d+\slen=\d+\s+(.*)$/s;

        #populate hash
        my $sKey = "$subs $gid";
        my $aSequences = $hSequences{$sKey};
        $hSequences{$sKey} = $aSequences = [] unless defined($aSequences);
        push @$aSequences, split(/\s+/, $sSequences);
    }

    #print results
    while (my ($k, $v) = each(%hSequences)) {
        $k =~ s/ / gid=/;
        print ">$k\n" . join("\n", sort @$v) . "\n";
    }

    Best, beth

      Thanks.. I am trying with a hash, but I am not good at hashes yet; I am still a beginner with them. That's the reason why I chose arrays :( In fact, I was trying to split the 2 GB file into smaller files and run the process on multiple servers to get the work done faster. Anyway, I will still try my luck with a hash :(
      Thanks for your support and encouragement. I tried your code after working to understand it, and it seems clear to me, but I am not able to split the header and populate the hash. That's where I am stuck right now. I tried using the split method, but in vain.
      Finally, I managed to get the results, but the problem is that, although hashes are fast, ordering the sequences is a problem with a hash. I had mentioned it in my query, in one of the example sets I gave, namely >2 geneid2: the >2_2 must come in the second position only. Even by using Tie::Hash I cannot do this, because the tied hash will follow the order of the input file. What do I do in this case? Please suggest.
        ++ ++ ++ for pushing yourself to try new things! I also really like that you worked to understand the code and make it your own.

        A while loop over each() always visits the keys in the hash's own internal order. To get the keys in the order you want, you need a slightly different technique: (a) extract the keys, (b) sort them, (c) use a foreach loop to visit each key, and (d) for each key, query the hash to get the value. The code to print out the hash would look something like this:

        # keys %foo: extracts the keys from hash %foo
        #            and returns them as an array
        #
        # sort @foo:  sorts the elements of array @foo
        #
        # foreach:    loops through the elements of an array
        my @aKeys = sort(keys(%hSequences));
        foreach my $k (@aKeys) {
            my $v = $hSequences{$k};   #get value
            $k =~ s/ / gid=/;
            print ">$k\n" . join("\n", sort @$v) . "\n";
        }
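
        One caveat (this variation is my own addition, not part of the code above): a plain string sort works while both ids stay single digits, but it would put "10 1" before "2 1", and it groups by number first. If you want the output grouped by gene id, as in your sample output, and safe for larger ids, a numeric two-level sort could replace the plain sort():

        my @aKeys = sort {
            my ($an, $ag) = split / /, $a;   # number and gene id from key "number geneid"
            my ($bn, $bg) = split / /, $b;
            $ag <=> $bg || $an <=> $bn;      # group by gene id first, then by number
        } keys %hSequences;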

        For more information, see keys and sort.

        I'm assuming the number of keys is a lot less than 2 GB worth and that you have enough memory to hold the keys and sort them. If not, there are some other things that can be done to sort a super large key list, but first see if you can do without getting fancy.
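
        If it ever does come to that, one rough sketch (my own illustration, not something you need yet; the filenames are made up) is to stream the keys out to a file and let the system's sort(1) utility order them, so the full key list never has to sit in memory:

        open my $out, '>', 'keys.txt' or die "keys.txt: $!";
        while (my $k = each %hSequences) {   # each() hands back one key at a time
            print {$out} "$k\n";
        }
        close $out;

        # let the external sort(1) utility do the heavy lifting on disk
        system('sort', '-o', 'keys.sorted', 'keys.txt') == 0
            or die "external sort failed: $?";

        open my $in, '<', 'keys.sorted' or die "keys.sorted: $!";
        while (my $k = <$in>) {
            chomp $k;
            my $v = $hSequences{$k};
            $k =~ s/ / gid=/;
            print ">$k\n" . join("\n", sort @$v) . "\n";
        }
        close $in;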

        Good luck and great work!

        Best, beth