in reply to Count the occurrence of an element in an array
There are two big issues, and one bug. The first issue is that you're slurping the entire file into memory at once. The second is that you're splitting the resulting string into an array of single-character elements. Each Perl array element carries internal overhead well beyond the size of the data it holds, so a 32MB string can balloon into something like a 320MB array (a rough, shooting-from-the-hip figure).
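If you want to see that overhead for yourself, here's a quick sketch using the Devel::Size module from CPAN (the exact numbers will vary by perl version and build):

use strict;
use warnings;
use Devel::Size qw(total_size);

my $string = 'A' x 1_000_000;      # a 1MB string
my @chars  = split //, $string;    # one array element per character

printf "string: %d bytes\n", total_size($string);
printf "array:  %d bytes\n", total_size(\@chars);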
The bug is this line: if($j!=/^>/).... The != operator does a numeric comparison, and the bare /^>/ matches against $_ rather than $j, so the condition doesn't test what you think it does. You probably intended the !~ binding operator instead.
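To illustrate the difference (with a made-up $j just for demonstration):

use strict;
use warnings;

my $j = '>sp|P12345 example header';

# As written, /^>/ matches against $_, and its result (1 or '')
# is compared numerically to $j -- almost certainly not intended:
#   if ( $j != /^>/ ) { ... }

# What was probably meant: "$j does not match /^>/"
if ( $j !~ /^>/ ) {
    print "$j is sequence data, not a FASTA header\n";
}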
What follows is a solution that holds no more than a single copy of the data set and avoids splitting it out into a big array. ...it is, however, untested, since I don't have your data set handy. ;)
use strict;
use warnings;
use diagnostics;

open PROTEOME, '<', 'human_complete_proteome_without_isoforms.fasta'
    or die "Cannot open proteome file: $!";

my $peptide;
while( <PROTEOME> ) {
    chomp;
    $peptide .= $_ unless /^>/;    # skip FASTA header lines
}
close PROTEOME;

print "The proteome is loaded\n";

my %count;
for( my $aminoacid_ix = 0; $aminoacid_ix < length $peptide; ++$aminoacid_ix ) {
    my $aminoacid = substr $peptide, $aminoacid_ix, 1;
    $count{$aminoacid}++;
}

print "Finished searching the document\n";

foreach my $aminoacid ( keys %count ) {
    print "$aminoacid\t occurs $count{$aminoacid} times \n";
}
There are probably additional opportunities to improve memory efficiency if I knew more about the data. In particular, it's probably safe to do all the reduction work in the same loop that reads the file (and that's exactly what hdb did in his solution; a sketch of that combined approach follows), but I am hesitant to bake that into my solution without seeing a small representative sample of the data (even though hdb's solution probably works just fine).
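For what it's worth, here is what that combined approach might look like (same caveat as above: untested, since I don't have the data set):

use strict;
use warnings;

open my $fh, '<', 'human_complete_proteome_without_isoforms.fasta'
    or die "Cannot open proteome file: $!";

my %count;
while ( my $line = <$fh> ) {
    next if $line =~ /^>/;    # skip FASTA header lines
    chomp $line;

    # Count each residue as we go; only one line is ever in memory.
    for my $ix ( 0 .. length($line) - 1 ) {
        $count{ substr $line, $ix, 1 }++;
    }
}
close $fh;

foreach my $aminoacid ( sort keys %count ) {
    print "$aminoacid\t occurs $count{$aminoacid} times \n";
}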
Updated: Improved efficiency of file reading loop.
Dave