I then ingest the three large .csv files and put them into the three hash tables (takes about 6 minutes) ...
I am stumped
If you can fit your three hash tables into memory (and the quoted statement says you're doing just that), then I don't see why Eily's approach here (and johngg's identical approach) would present any implementation problem. The only stumbling block I can see is that the output hash might not also fit into memory. In this case, common key/value-set records could be written out to a file, perhaps a .csv file, for later processing.
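For what it's worth, here is a minimal sketch of that fall-back, assuming three already-populated hashes named %hash_1, %hash_2 and %hash_3 (the names are hypothetical) whose values are plain scalars. Keys common to all three are written straight to a .csv file rather than into a fourth in-memory hash:

    use strict;
    use warnings;

    my (%hash_1, %hash_2, %hash_3);    # assume these were loaded from the .csv files

    open my $out_fh, '>', 'common.csv' or die "Cannot open common.csv: $!";
    for my $key (sort keys %hash_1) {
        # a key qualifies only if it exists in all three input hashes
        next unless exists $hash_2{$key} && exists $hash_3{$key};
        # one output record per common key: the key plus its value from each hash
        print {$out_fh} join(',', $key, $hash_1{$key}, $hash_2{$key}, $hash_3{$key}), "\n";
    }
    close $out_fh or die "Cannot close common.csv: $!";

The output file can then be post-processed at leisure without ever holding the combined result in RAM.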
If the problem you're facing is that the input data is so large that even one input hash won't fit into memory, then an approach of doing an "external" sort of each input file and then merging the sorted input files with a Perl script suggests itself. This approach scales well to input files of enormous size, far larger than anything that could be accommodated in system RAM, and large numbers of input files, and can still be executed in minutes (in most cases) rather than hours.
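A rough sketch of the merge step, assuming each input file has already been sorted on its key by an external sort (e.g. GNU sort -t, -k1,1 file1.csv > file1.sorted), each line has the form key,value, and keys are unique within a file; the file names are hypothetical. Only one record per file is held in memory at any moment, so file size is limited by disk, not RAM:

    use strict;
    use warnings;

    my @files = ('file1.sorted', 'file2.sorted', 'file3.sorted');
    my @fh    = map { open my $h, '<', $_ or die "Cannot open $_: $!"; $h } @files;
    my @cur   = map { read_record($_) } @fh;    # current [key, value] from each file

    # stop as soon as any one file is exhausted: no further common keys are possible
    while (!grep { !defined } @cur) {
        my @keys  = map { $_->[0] } @cur;
        my ($min) = sort @keys;
        if ($keys[0] eq $min && $keys[1] eq $min && $keys[2] eq $min) {
            # key present in all three files: emit the merged record
            print join(',', $min, map { $_->[1] } @cur), "\n";
            @cur = map { read_record($_) } @fh;              # advance all three
        }
        else {
            # advance only the file(s) currently holding the smallest key
            for my $i (0 .. $#cur) {
                $cur[$i] = read_record($fh[$i]) if $cur[$i][0] eq $min;
            }
        }
    }

    sub read_record {
        my ($fh) = @_;
        my $line = <$fh>;
        return undef unless defined $line;
        chomp $line;
        return [ split /,/, $line, 2 ];    # [key, value]
    }

This mirrors the in-memory comparison: it prints one record per key that appears in all three files, in sorted key order.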
hash_1 (input)
key => [value_hash_1]
If this is the structure of your input hashes, I don't see why you're going to the extra (and superfluous) effort of putting each value into an anonymous array — and then taking it out again when you create the output hash value. In what you've shown us so far, each input hash key has only a single value; why stick it into an array?
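To illustrate (the names are made up), compare the two layouts; with exactly one value per key, the plain-scalar form saves a needless dereference every time you read the value back:

    use strict;
    use warnings;

    my %wrapped = ( key1 => [ 'value1' ] );   # value buried in an anonymous array
    my %plain   = ( key1 => 'value1' );       # value stored directly

    print $wrapped{key1}[0], "\n";   # extra dereference needed
    print $plain{key1},      "\n";   # plain lookup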
Give a man a fish: <%-{-{-{-<