I find the same problem cropping up with XML. XML is hierarchical (it looks organized) and is easily parsed by reasonably advanced pattern-recognition systems, such as regexes or, more often, the human brain. That's why people like putting everything into XML -- they can easily make sense of it. Computers, however, are lousy (read: slow) at dealing with strings, and XML is just more strings.
When you put your data into a format that computers are good at -- e.g. numbers -- the result is code that executes even faster than the hash case. I timed it with Time::HiRes; the loop AND the initial transliteration together take less time than using a hash.
    # a computer is processing this data.
    # put the data into a form the computer handles better.
    $genome =~ y/atcg/0123/;   # voila

    # each two-base chunk is now a pair of digits, which Perl happily
    # treats as a small integer index into @sums
    for ($i = 0; $i < length($genome) - 1; $i += 2) {
        $sums[ substr($genome, $i, 2) ]++;
    }
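If you want to reproduce the timing, here is a rough benchmark sketch along the lines of what I described. It is not the exact script I ran; the random test genome, its 1,000,000-base size, and names like %counts and @sums are just assumptions for illustration.

    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    # throwaway test genome (assumption: 1,000,000 random bases)
    my $genome = join '', map { (qw(a t c g))[int rand 4] } 1 .. 1_000_000;

    # hash version: count non-overlapping two-base pairs keyed by string
    my $t0 = [gettimeofday];
    my %counts;
    for (my $i = 0; $i < length($genome) - 1; $i += 2) {
        $counts{ substr($genome, $i, 2) }++;
    }
    printf "hash:  %.4f s\n", tv_interval($t0);

    # array version: transliterate to digits, then use each pair as a
    # numeric index (so "at" becomes "01", i.e. slot 1 of @sums)
    $t0 = [gettimeofday];
    (my $digits = $genome) =~ y/atcg/0123/;
    my @sums;
    for (my $i = 0; $i < length($digits) - 1; $i += 2) {
        $sums[ substr($digits, $i, 2) ]++;
    }
    printf "array: %.4f s\n", tv_interval($t0);

Note that the transliteration is deliberately inside the timed region, so the array version is charged for its setup cost as well as the loop.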
In reply to Re: how can I speed up this perl?? by Stevie-O
in thread how can I speed up this perl?? by Anonymous Monk