When a hash-based solution starts to choke on the amount of data, the easiest fix is to switch to an on-disk hash; see BerkeleyDB or DB_File. The only change needed is to load the module and tie the hash; the rest of the script stays the same.
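A minimal sketch of what that one change looks like with DB_File (the filename `counts.db` and the `%count` hash are examples, not from the original script):

```perl
use strict;
use warnings;
use Fcntl;      # for O_RDWR and O_CREAT
use DB_File;    # exports $DB_HASH

# Tie the hash to an on-disk Berkeley DB file. This is the only line
# that changes; every other use of %count stays exactly as it was
# when %count was an ordinary in-memory hash.
my %count;
tie %count, 'DB_File', 'counts.db', O_RDWR | O_CREAT, 0644, $DB_HASH
    or die "Cannot tie counts.db: $!";

# Normal hash operations now read and write the file transparently.
$count{apple}++;
$count{banana} += 2;

print "apple: $count{apple}\n";

untie %count;
```

Because `tie` intercepts every hash access, the data never has to fit in memory at once; Berkeley DB pages it to and from disk as needed.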
Jenda
Enoch was right!
Enjoy the last years of Rome.
In reply to Re: DBI::SQLite slowness by Jenda
in thread DBI::SQLite slowness by Endless