I'm parsing one input file of human chromosome locations to retrieve start and end positions, then reading in the per-chromosome score file to pick out the scores at those positions and calculate a mean score for the whole region. The process is terribly inefficient, even if I split the files, and each chromosome file is already around 3 GB. Is there a better approach to this problem? Perhaps I can jump to the right locations in a speedier way? Help appreciated.
    use strict;
    use warnings;
    use Statistics::Descriptive;    # was missing in the original

    my $data = shift or die "usage: $0 intervals.txt\n";
    open(my $F, '<', $data) or die "cannot open $data: $!";
    # the original printed to OUT without ever opening it; "out.txt" is a placeholder name
    open(my $OUT, '>', 'out.txt') or die "cannot open out.txt: $!";

    while (my $table = <$F>) {
        chomp $table;
        next if $table =~ /Header/;
        my @line = split /\t/, $table;
        my ($chr, $start, $end) = @line[0, 2, 3];

        my @array;
        my $sum = 0;    # bug in original: $sum was never reset, so it accumulated across regions

        # one score file per chromosome, e.g. chr1.score.txt
        open(my $R, '<', "$chr.score.txt") or die "cannot open $chr.score.txt: $!";
        while (my $row = <$R>) {
            chomp $row;
            # format: chrom ID, position, score
            # chr1  10925  0.239
            my (undef, $pos, $score) = split /\t/, $row;
            if ($pos >= $start && $pos <= $end) {
                push @array, $score;
                $sum += $score;
            }
        }
        close $R;

        # get mean, min and max
        my $len = scalar @array;
        next unless $len;    # bug in original: empty regions caused division by zero
        my @sorted = sort { $a <=> $b } @array;
        my ($min, $max) = ($sorted[0], $sorted[-1]);
        my $mean = $sum / $len;

        my $stat = Statistics::Descriptive::Full->new();
        $stat->add_data(@sorted);
        my $geo_mean = $stat->geometric_mean();    # note: requires strictly positive scores

        print $OUT "$table\t$mean\t$geo_mean\t$min\t$max\t$len\n";
    }
    close $F;
    close $OUT;
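The main cost above is re-scanning an entire 3 GB score file for every interval. Since score files are sorted by position, you can instead group your intervals by chromosome, load (or index) each score file once, and binary-search straight to the start of each region. Here is a minimal sketch of that lookup in Python (the same idea works in Perl with a sorted array and a hand-rolled binary search); `region_stats` is a hypothetical helper name, and the positions/scores arrays are assumed to be loaded once per chromosome:

```python
from bisect import bisect_left, bisect_right

def region_stats(positions, scores, start, end):
    """Stats for scores whose position falls in [start, end].

    positions must be sorted ascending and parallel to scores.
    Returns (mean, min, max, n), or None if the region is empty.
    """
    # jump directly to the slice covering the region: O(log n), not O(n)
    lo = bisect_left(positions, start)
    hi = bisect_right(positions, end)
    window = scores[lo:hi]
    if not window:
        return None
    return (sum(window) / len(window), min(window), max(window), len(window))

# toy data standing in for one chromosome's score file
positions = [10, 20, 30, 40]
scores = [1.0, 2.0, 3.0, 4.0]
print(region_stats(positions, scores, 15, 35))  # -> (2.5, 2.0, 3.0, 2)
```

If the files are too big to hold in memory, the same binary search can be done on disk with seek() over fixed-width records, or you can reach for an existing positional index such as tabix. Either way, each chromosome file is read at most once instead of once per interval.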
In reply to "speeding up parsing, jump to line" by cburger