PerlMonks
Re: Efficiency of a Hash with 1 Million Entries
by ssandv (Hermit) on Jul 01, 2010 at 21:49 UTC ( id://847629 )
You might try just having it fetch (and maybe print) $seq for every row. That would give you a good idea of how much of your time is spent in hash lookup and storage versus row processing. Personally, I suspect that fetching a million rows one at a time may be the slow part. Alternately, you could profile your code with Devel::NYTProf or something similar. Hash random access is fast by design, but doing *anything* a million times adds up.
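To put a rough number on the hash side of that comparison, here is a minimal, self-contained sketch (not from the original post; the hash %h and key format are illustrative) that times one million stores and one million lookups with Time::HiRes, so you can see how little of the total runtime pure hash access accounts for:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(time);

# Build a hash with one million entries and time it.
my %h;
my $t0 = time;
$h{"seq$_"} = $_ for 1 .. 1_000_000;
my $build = time - $t0;

# Time one million lookups, summing values so the loop isn't optimized away.
$t0 = time;
my $sum = 0;
$sum += $h{"seq$_"} for 1 .. 1_000_000;
my $lookup = time - $t0;

printf "build: %.3fs  lookup: %.3fs\n", $build, $lookup;
```

If both numbers come out well under your script's total runtime, the bottleneck is the per-row fetching, not the hash. To profile the real script as suggested, run it as `perl -d:NYTProf yourscript.pl` and then `nytprofhtml` to generate a per-line report.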
In Section: Seekers of Perl Wisdom