in reply to Compare two large files and extract a matching row...

The problem here is that storing too much data in an array will give you a memory problem very fast (especially with large files).

Placing the data in a hash is a great solution (as mentioned above), but for large files that might still be a problem.
What might help you out here is that Perl is amazingly fast at reading files; I've seen it go through more than 40 GB of data in minutes. This means that if your hash is too big, you can split File1 into smaller hashes, go over File2 once per chunk, and it still won't take you very long.
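A minimal sketch of that chunking idea. The file names, the key (first whitespace-separated field of each line) and the chunk size are all assumptions here; adapt them to your actual data. It also assumes File1's keys are reasonably unique, otherwise a File2 line can match in more than one pass:

```perl
use strict;
use warnings;

# Return the lines of $file2 whose first field also appears as the
# first field of some line in $file1. File1 is read in chunks of at
# most $chunk_size keys, so the lookup hash never outgrows memory;
# File2 is re-read once per chunk.
sub match_rows {
    my ( $file1, $file2, $chunk_size ) = @_;
    my @matches;

    open my $f1, '<', $file1 or die "$file1: $!";
    until ( eof $f1 ) {

        # Build a hash from the next chunk of File1 keys
        my %seen;
        while ( my $line = <$f1> ) {
            my ($key) = split ' ', $line;
            $seen{$key} = 1 if defined $key;
            last if keys(%seen) >= $chunk_size;
        }

        # One full pass over File2 against this chunk
        open my $f2, '<', $file2 or die "$file2: $!";
        while ( my $line = <$f2> ) {
            my ($key) = split ' ', $line;
            push @matches, $line if defined $key && exists $seen{$key};
        }
        close $f2;
    }
    close $f1;
    return @matches;
}
```

You'd call it as something like `print for match_rows('File1', 'File2', 1_000_000);` and tune the chunk size to however much memory you can spare — a bigger chunk means fewer passes over File2.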

Hope this helps
Mr Guy