Monks,
I have an efficiency question. I need to create a very large tab-delimited file, 4^7 by 4^7 in dimension. What I have now is a double loop that prints each element individually in its appropriate location. The file is a symmetric square matrix, so I am unsure whether it will take more time to save 0.5 * 4^7 * 4^7 entries in RAM so that each value is calculated only once, or whether it is better to calculate each element twice inside the double loop. If any of the monks are especially bored, I would also appreciate comments on how I could improve the efficiency of the code below. Please forgive me if there are glaring errors; I am not a programmer, just a lowly grad student pretending to be one.
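For scale, here is a back-of-the-envelope count of what the "cache half the matrix" option costs. This is only a sketch: the per-scalar byte cost is an assumption (it varies by Perl build), and the packed-string idea assumes the distances fit in one byte, which holds for k = 7 since an edit distance between two 7-mers is at most 7.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The symmetric half of an n x n matrix, diagonal included,
# holds n*(n+1)/2 distinct entries.
my $n       = 4 ** 7;               # 16384 k-mers
my $entries = $n * ($n + 1) / 2;    # 134_225_920 cached distances

# A plain Perl scalar costs tens of bytes each, so an array of
# arrays of these would run to several GB.  For k = 7 every
# distance fits in one byte (0..7), so a packed string (substr
# or vec access) would stay near 128 MB.
printf "%d entries, ~%.0f MB packed\n", $entries, $entries / 2 ** 20;
```

So whether caching is even feasible depends heavily on how the cached values are stored, not just on how many there are.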
#!/usr/bin/perl
use strict;
use warnings;

my @kmers = <>;
chomp @kmers;
my $kmer_l = length($kmers[0]);

for (my $j = 0; $j < @kmers; ++$j) {
    print &map_constraint($kmer_l, &return_constraint($kmers[$j], $kmers[0]));
    for (my $i = 1; $i < @kmers; ++$i) {
        print "\t", &map_constraint($kmer_l, &return_constraint($kmers[$j], $kmers[$i]));
    }
    print "\n";
}

sub map_constraint {
    my $dt = 0.0000000001;
    ## input: string length and edit distance value
    return (-(($_[1] + $dt) / (($_[0] / 2) + $dt) - 1));    ## returns a value in (-1,1)
}

sub return_constraint {    # string edit distance
    my ($len1, $len2) = (length $_[0], length $_[1]);
    if ($len1 == 0) { return $len2; }
    if ($len2 == 0) { return $len1; }
    my %hash;
    for (my $i = 0; $i <= $len1; ++$i) {
        for (my $j = 0; $j <= $len2; ++$j) {
            $hash{$i}{$j} = 0;
            $hash{0}{$j} = $j;
        }
        $hash{$i}{0} = $i;
    }
    my @a = split(//, $_[0]);
    my @b = split(//, $_[1]);
    for (my $i = 1; $i <= $len1; ++$i) {
        for (my $j = 1; $j <= $len2; ++$j) {
            my $cost = ($a[$i - 1] eq $b[$j - 1]) ? 0 : 1;
            $hash{$i}{$j} = &min([$hash{$i - 1}{$j} + 1,
                                  $hash{$i}{$j - 1} + 1,
                                  $hash{$i - 1}{$j - 1} + $cost]);
        }
    }
    return $hash{$len1}{$len2};
}

sub min {
    my $min = ${$_[0]}[0];
    foreach my $elem (@{$_[0]}) {
        if ($elem < $min) {
            $min = $elem;
        }
    }
    return $min;
}
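On the symmetry question itself: one way to avoid the double calculation is to fill only the upper triangle and mirror it when printing, so each pair is computed exactly once. This is only a sketch, not a drop-in replacement: the `dist` stub below is a simple Hamming-style count (it assumes equal-length strings, which holds for k-mers), standing in for the real edit-distance sub so the example is self-contained, and `@kmers` is hard-coded instead of read from `<>`.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @kmers = qw(AAA AAT ATT TTT);    # stand-in input
my $n     = @kmers;

# Fill only the upper triangle ($i <= $j): n*(n+1)/2 calls
# instead of n*n.
my @d;
for my $i (0 .. $n - 1) {
    for my $j ($i .. $n - 1) {
        $d[$i][$j] = dist($kmers[$i], $kmers[$j]);
    }
}

# Print the full square matrix, mirroring across the diagonal.
for my $i (0 .. $n - 1) {
    print join("\t",
        map { $i <= $_ ? $d[$i][$_] : $d[$_][$i] } 0 .. $n - 1), "\n";
}

sub dist {    # stub: count of mismatched positions
    my ($x, $y) = @_;
    my $c = 0;
    for (0 .. length($x) - 1) {
        $c++ if substr($x, $_, 1) ne substr($y, $_, 1);
    }
    return $c;
}
```

The same mirroring works with the real subs; the remaining question is storage, since @d here is an array of plain scalars and would be large at n = 4^7.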
In reply to Efficiency of implementation by azheid