in reply to Loading file into memory

I know I'm not really answering your question (that's because I suspect an XY Problem here), but I'd likely approach the same issue completely backwards. Instead of loading anything into memory, I'd start by transforming your CSV file into a second CSV file that holds all the information of interest. Then I'd use DBD::CSV (which I probably would have used anyway, both to read the original CSV file and to write the transformed one) to run SQL JOIN queries that pull the information I want out of each.
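As a rough sketch of what I mean (assumes DBD::CSV and its SQL::Statement engine are installed; the file names and column names here are made up, not taken from your data):

```perl
use strict;
use warnings;
use DBI;

# Treat the CSV files in the current directory as SQL tables.
# "original.csv" and "interesting.csv" are hypothetical names.
my $dbh = DBI->connect( "dbi:CSV:", undef, undef, {
    f_dir      => ".",
    f_ext      => ".csv/r",   # table "original" maps to "original.csv"
    RaiseError => 1,
} ) or die $DBI::errstr;

# Join the original file against the derived "interesting bits" file.
my $sth = $dbh->prepare( q{
    SELECT original.id, original.name, interesting.score
    FROM   original, interesting
    WHERE  original.id = interesting.id
} );
$sth->execute;
while ( my @row = $sth->fetchrow_array ) {
    print join( ", ", @row ), "\n";
}
```

SQL::Statement only handles a modest subset of SQL, so keep the queries simple; the payoff is that the same query text survives a later move to a real database driver.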

The advantage of this is that I could later migrate to a real database (which is where data should live anyway), such as MySQL, PostgreSQL, SQLite, or even DB2 or Oracle or whatever, when I need more speed. I suspect that would be faster than whatever you do in Perl, even if I have to pre-populate the database from the CSV files at the beginning of each run. And even before migrating to a database, I don't expect this to be hugely slower than your approach, but it does scale better.
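The pre-populate step is cheap to write, too. Here's one way it might look with SQLite (assumes DBD::SQLite and Text::CSV are installed; again, the file name, table name, and columns are hypothetical):

```perl
use strict;
use warnings;
use DBI;
use Text::CSV;

# Load a CSV file into a scratch SQLite database at the start
# of each run, so the JOINs happen inside the database.
my $dbh = DBI->connect( "dbi:SQLite:dbname=scratch.db", "", "",
                        { RaiseError => 1, AutoCommit => 0 } );
$dbh->do( "CREATE TABLE IF NOT EXISTS original (id INTEGER, name TEXT)" );

my $csv = Text::CSV->new( { binary => 1, auto_diag => 1 } );
open my $fh, "<", "original.csv" or die "original.csv: $!";
$csv->getline($fh);    # skip the header row

my $ins = $dbh->prepare( "INSERT INTO original (id, name) VALUES (?, ?)" );
while ( my $row = $csv->getline($fh) ) {
    $ins->execute(@$row);
}
$dbh->commit;          # one transaction keeps the bulk load fast
```

Because everything above goes through DBI, switching from DBD::CSV to DBD::SQLite (or DBD::mysql, DBD::Pg, ...) is mostly a matter of changing the connect string.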