in reply to Matching hash keys from different hashes and utilizing in new hash

How about reading the tables into a database and using SQL instead? Your files look close enough to CSV that you should use Text::CSV, or better Text::CSV_XS, for reading instead of parsing them manually. Add DBI and DBD::SQLite and you have a performant, serverless database. Part one of your program would read the CSV files and write them into the SQLite database; part two would just query the database. Or, even easier but slower, use DBI with DBD::CSV (which sits on top of Text::CSV) to make your CSV files appear directly as tables in a relational database.
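A minimal sketch of the two-part approach, assuming two files with an `id,value` header (the file names, column names, and demo rows are made up for illustration — your real data will differ):

```perl
use strict;
use warnings;
use DBI;
use Text::CSV_XS;

# Demo input files so the sketch is self-contained (placeholder data).
for my $pair (['a.csv', "id,value\n1,foo\n2,bar\n"],
              ['b.csv', "id,value\n2,baz\n3,qux\n"]) {
    open my $out, '>', $pair->[0] or die "$pair->[0]: $!";
    print $out $pair->[1];
    close $out;
}

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 0 });
my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });

# Part one: load each CSV file into its own table.
for my $file ('a.csv', 'b.csv') {
    (my $table = $file) =~ s/\.csv\z//;
    $dbh->do("CREATE TABLE $table (id TEXT PRIMARY KEY, value TEXT)");
    my $ins = $dbh->prepare("INSERT INTO $table (id, value) VALUES (?, ?)");
    open my $fh, '<', $file or die "$file: $!";
    $csv->getline($fh);                  # skip the header row
    while (my $row = $csv->getline($fh)) {
        $ins->execute(@$row);
    }
    close $fh;
}
$dbh->commit;

# Part two: let SQL do the key matching that the hashes did before.
my @matches;
my $sth = $dbh->prepare(
    'SELECT a.id, a.value, b.value FROM a JOIN b ON a.id = b.id'
);
$sth->execute;
while (my ($id, $va, $vb) = $sth->fetchrow_array) {
    push @matches, [$id, $va, $vb];
    print "$id\t$va\t$vb\n";
}
$dbh->disconnect;
```

With DBD::CSV instead, part one disappears entirely: each file in the configured directory simply shows up as a table, at the price of re-parsing the CSV on every query.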

Update: Why a database? Because it can easily handle input files significantly larger than your available RAM, whereas with pure hashes you are limited by available RAM. You don't have to use SQLite, but it is a good starting point for testing. If things grow bigger, I would recommend PostgreSQL. If you have a commercial RDBMS around (Oracle, MS SQL Server, ...), you may as well use that.

Alexander

--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

Re^2: Matching hash keys from different hashes and utilizing in new hash
by Laurent_R (Canon) on Oct 21, 2017 at 21:28 UTC
    How about reading the tables into a database and using SQL instead? ... Or, even easier but slower, ...
    Yes, you could use a database. This might even be the best solution if the files are truly huge and can't fit into memory.

    But if speed matters (and assuming the data fits into the available memory), the hash solution I suggested would have finished long before the data was even loaded into the database, let alone queried.
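    The hash approach being referred to can be sketched roughly as follows — the file names and the tab-separated `key<TAB>value` layout are assumptions for illustration, not the OP's actual format:

    ```perl
    use strict;
    use warnings;

    # Demo input files so the sketch is self-contained (placeholder data).
    open my $out, '>', 'file_a.txt' or die $!;
    print $out "1\tfoo\n2\tbar\n";
    close $out;
    open $out, '>', 'file_b.txt' or die $!;
    print $out "2\tbaz\n3\tqux\n";
    close $out;

    # Pass one: slurp the first file into a hash keyed by the join key.
    my %a;
    open my $fh, '<', 'file_a.txt' or die $!;
    while (<$fh>) {
        chomp;
        my ($key, $value) = split /\t/, $_, 2;
        $a{$key} = $value;
    }
    close $fh;

    # Pass two: stream the second file and emit matches immediately --
    # one hash lookup per line, no load-then-query round trip.
    my @matched;
    open $fh, '<', 'file_b.txt' or die $!;
    while (<$fh>) {
        chomp;
        my ($key, $value) = split /\t/, $_, 2;
        next unless exists $a{$key};
        push @matched, "$key\t$a{$key}\t$value";
        print "$key\t$a{$key}\t$value\n";
    }
    close $fh;
    ```

    Only the first file ever has to fit in memory; the second is streamed line by line, which is why this wins on speed as long as RAM suffices.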