in reply to Storable: too slow retrieval of big file.

Salutations, we solved the problem by switching to a database-based solution using BerkeleyDB. We convert the hash text file to a BerkeleyDB database with the db_load command-line utility:
db_load -c duplicates=1 -T -t hash -f dict.txt dict.db
which converts "dict.txt" (a text file where each record is a pair of lines, the first line holding the key and the second holding the value) into the BerkeleyDB database "dict.db", with duplicate keys allowed. This solution turned out to work great. Thank you for all the help.
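
For anyone following along, here is a minimal sketch of how the converted database could be read back from Perl using the BerkeleyDB module from CPAN. The filename dict.db matches the command above; the key 'foo' is just a placeholder. Because the file was loaded with duplicates=1, a cursor is needed to fetch every value stored under one key:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use BerkeleyDB;

    # Open the converted hash database read-only; the duplicates
    # setting from db_load is picked up from the file's metadata.
    my $db = BerkeleyDB::Hash->new(
        -Filename => 'dict.db',
        -Flags    => DB_RDONLY,
    ) or die "cannot open dict.db: $BerkeleyDB::Error\n";

    # 'foo' is a placeholder key for illustration.
    my ($k, $v) = ('foo', '');

    # Walk all duplicate values under the key with a cursor:
    # DB_SET positions on the first match, DB_NEXT_DUP gets the rest.
    my $cursor = $db->db_cursor();
    for (my $status = $cursor->c_get($k, $v, DB_SET);
         $status == 0;
         $status = $cursor->c_get($k, $v, DB_NEXT_DUP))
    {
        print "$k => $v\n";
    }
    $cursor->c_close();

Since the database lives on disk, lookups no longer require deserializing the whole structure into memory the way Storable does, which is where the speedup comes from.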