in reply to Efficient way to handle huge number of records?
As stated above, yes, Perl can handle a hash of this size, assuming sufficient memory. On most systems, though, holding that many records in memory is not ideal. I would highly recommend either a MySQL database or restructuring the data into a fixed-length, sorted format that you can binary-search whenever you need to locate a record.

The latter method does require loading all the data into memory and sorting it once, but you could do that on someone else's computer if necessary, since it's a one-time thing. Once the sorted data is written to disk, locating a specific record costs only about log2(n) seeks, where n is the total number of records, with essentially no memory overhead.

The MySQL database has the advantage of more flexibility, and you don't have to feed it the records one at a time: you can convert the data line by line into INSERT statements, save the result as a .sql file, and import it all in one go. MySQL's LOAD DATA INFILE will also pull in CSV/delimited files (and, with the right options, fixed-length rows) directly. As long as the database is on the same machine as the data file, the import should be quite fast.
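If you go the fixed-length route, the lookup itself is just a binary search with seek. Here's a minimal sketch, assuming 64-byte records sorted by a space-padded key in the first 16 bytes; the file name, record length, and key width are made up for illustration and would need to match your actual layout:

    use strict;
    use warnings;

    # Binary search over a file of fixed-length, sorted records.
    # Record length and key width are placeholders - adjust to your layout.
    my $REC_LEN = 64;   # bytes per record, newline included
    my $KEY_LEN = 16;   # key is the first $KEY_LEN bytes, space-padded

    sub lookup_record {
        my ($file, $want) = @_;
        open my $fh, '<', $file or die "Can't open $file: $!";
        binmode $fh;
        my ($lo, $hi) = (0, (-s $fh) / $REC_LEN - 1);
        while ($lo <= $hi) {
            my $mid = int(($lo + $hi) / 2);
            seek $fh, $mid * $REC_LEN, 0 or die "seek: $!";
            read $fh, my $rec, $REC_LEN or die "read: $!";
            (my $key = substr $rec, 0, $KEY_LEN) =~ s/\s+$//;
            if    ($key lt $want) { $lo = $mid + 1 }
            elsif ($key gt $want) { $hi = $mid - 1 }
            else                  { return $rec }   # found it
        }
        return;   # not found
    }

    # my $rec = lookup_record('records.dat', 'SOME_KEY');

Each lookup touches at most log2(n) records, so even tens of millions of records means only a couple dozen seeks.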
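For the database route, something along these lines would turn a tab-delimited file into one big .sql file you can feed to mysql in a single shot. The table name, column names, and file names are just placeholders, and the quoting here is deliberately crude; for a real import, DBI's quote() or LOAD DATA INFILE is the safer bet:

    use strict;
    use warnings;

    # Convert a tab-delimited data file into batched INSERT statements,
    # then import in one shot:  mysql mydb < records.sql
    # Table name, columns, and file names are placeholders.
    my $BATCH = 1000;   # rows per INSERT keeps each statement a sane size

    open my $in,  '<', 'records.txt' or die "Can't read records.txt: $!";
    open my $out, '>', 'records.sql' or die "Can't write records.sql: $!";

    my @rows;
    while (my $line = <$in>) {
        chomp $line;
        my @f = split /\t/, $line, -1;
        s/'/''/g for @f;          # crude quoting; DBI's quote() is safer
        push @rows, "('" . join("','", @f) . "')";
        if (@rows == $BATCH) {
            print $out "INSERT INTO records (id,name,value) VALUES\n",
                       join(",\n", @rows), ";\n";
            @rows = ();
        }
    }
    print $out "INSERT INTO records (id,name,value) VALUES\n",
               join(",\n", @rows), ";\n" if @rows;

Batching the rows into multi-row INSERTs keeps the import from paying per-statement overhead on every single record.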