in reply to Matching data between huge files

If the second file is sorted, you could use the core module Search::Dict for this. It does a binary search on the sorted file, so it can find an entry in a million-line file in no more than about 20 disk reads.
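A minimal sketch of what that lookup might look like; the file name sorted.txt, the key, and the tab-delimited layout are assumptions about your data:

    use strict;
    use warnings;
    use Search::Dict;

    open my $fh, '<', 'sorted.txt' or die "Can't open sorted.txt: $!";

    my $key = 'wanted_key';
    look $fh, $key;    # binary-search to the first line >= $key

    my $line = <$fh>;
    if ( defined $line && $line =~ /^\Q$key\E\t/ ) {
        print "Found: $line";
    }
    else {
        print "No match for $key\n";
    }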

If you want more efficient lookups, you can use a dbm file of some sort. People usually recommend BerkeleyDB (also accessible through DB_File) for this; Perl also comes with SDBM_File. The drawback is that you have to build the dbm file first. For very large files I recommend sorting your input data and using a btree implementation (an option with Berkeley DB), since inserting keys in sorted order fills the btree pages much more efficiently.
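Building and querying a btree through DB_File could look something like this; lookup.db, big_file.txt, and the tab-delimited key/value layout are placeholders:

    use strict;
    use warnings;
    use Fcntl;
    use DB_File;

    tie my %lookup, 'DB_File', 'lookup.db', O_RDWR|O_CREAT, 0644, $DB_BTREE
        or die "Can't tie lookup.db: $!";

    # Build the index once; pre-sorted input inserts fastest into a btree.
    open my $in, '<', 'big_file.txt' or die "Can't open big_file.txt: $!";
    while ( my $line = <$in> ) {
        chomp $line;
        my ( $key, $value ) = split /\t/, $line, 2;
        $lookup{$key} = $value;
    }
    close $in;

    # Later lookups go straight through the btree.
    print exists $lookup{'some_key'} ? "$lookup{'some_key'}\n" : "not found\n";

    untie %lookup;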

A third option worth considering is using DBI and DBD::SQLite to create a real relational database. For this simple problem it is overkill, but if your data structure is going to get more complex, or you might have several related tasks like this, then it is well worth the setup cost.
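If you go that route, a rough sketch with DBI and DBD::SQLite follows; again the file names, table name, and tab-delimited layout are my assumptions:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=lookup.sqlite', '', '',
        { RaiseError => 1, AutoCommit => 0 } );

    $dbh->do('CREATE TABLE IF NOT EXISTS lines (key TEXT PRIMARY KEY, value TEXT)');

    # Load once inside a transaction; much faster for bulk inserts.
    my $ins = $dbh->prepare('INSERT OR REPLACE INTO lines (key, value) VALUES (?, ?)');
    open my $in, '<', 'big_file.txt' or die "Can't open big_file.txt: $!";
    while ( my $line = <$in> ) {
        chomp $line;
        my ( $key, $value ) = split /\t/, $line, 2;
        $ins->execute( $key, $value );
    }
    close $in;
    $dbh->commit;

    # Lookups use the primary key index.
    my ($value) = $dbh->selectrow_array(
        'SELECT value FROM lines WHERE key = ?', undef, 'some_key' );
    print defined $value ? "$value\n" : "not found\n";

    $dbh->disconnect;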