You might try just having it fetch (and maybe print) $seq for every row, with the hash work taken out. Comparing that run against the full one would give you a good idea of how much of your time is spent in hash lookup and storage versus in fetching and row processing. Personally, I suspect that fetching a million rows one at a time might be slow. Alternatively, you could profile your code using Devel::NYTProf or something similar. Hash random access is pretty fast by design, but doing *anything* a million times can slow things down. A quick sketch of that two-pass comparison is below.
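To make it concrete, here's a minimal sketch of what I mean. It assumes a DBI handle, a query returning ($seq, payload) pairs, and Time::HiRes for the timing; the connect string, table, and column names are stand-ins for whatever your real code uses:

    use strict;
    use warnings;
    use DBI;
    use Time::HiRes qw(gettimeofday tv_interval);

    # Hypothetical connection and query -- substitute your own.
    my $dbh = DBI->connect('dbi:SQLite:dbname=test.db', '', '',
                           { RaiseError => 1 });
    my $sth = $dbh->prepare('SELECT seq, payload FROM rows');

    # Pass 1: fetch only. This isolates the cost of pulling a million
    # rows through DBI, with no hash work at all.
    my $t0 = [gettimeofday];
    $sth->execute;
    while ( my ($seq, $payload) = $sth->fetchrow_array ) {
        # deliberately do nothing with the row
    }
    printf "fetch only:   %.2fs\n", tv_interval($t0);

    # Pass 2: fetch plus hash storage. The difference between the two
    # timings is roughly what the hash lookups/stores are costing you.
    my %seen;
    $t0 = [gettimeofday];
    $sth->execute;
    while ( my ($seq, $payload) = $sth->fetchrow_array ) {
        $seen{$seq} = $payload;
    }
    printf "fetch + hash: %.2fs\n", tv_interval($t0);

If you go the profiler route instead, Devel::NYTProf is easy to try: run the script as perl -d:NYTProf yourscript.pl and then run nytprofhtml to get per-line timings, which will answer the fetch-vs-hash question directly.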
In reply to Re: Efficiency of a Hash with 1 Million Entries
by ssandv
in thread Efficiency of a Hash with 1 Million Entries
by gunr