in reply to Efficient search through a huge dataset

Although not a Perl solution, if you're on a Unix-like platform, you have access to the standard tool comm, which does exactly what you want, provided that the input files are sorted. comm compares two sorted files line by line and reports which lines are common to both files and which are unique to either one. For example, here is how you would find the records shared by both files:
$ comm -12 <(sort -u file1) <(sort -u file2)
(If the files are already sorted, you can just pass them directly to comm, without first processing with sort. Here, I'm using the bash shell's <(command) syntax to avoid having to deal with temporary files for holding the sorted records.)
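
For instance, with pre-sorted, deduplicated inputs (the .sorted names below are just placeholders for whatever your sorted files are called), the command reduces to:

$ comm -12 file1.sorted file2.sorted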

Here's how to find the records that are unique to the first file:

$ comm -23 <(sort -u file1) <(sort -u file2)
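
The digits name the output columns to suppress: comm prints lines unique to the first file in column 1, lines unique to the second file in column 2, and lines common to both in column 3. So the symmetric command for records unique to the second file is:

$ comm -13 <(sort -u file1) <(sort -u file2)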
Most sort implementations are fast and will fall back to external (file-based) sorting when the input doesn't fit in memory, so you don't need to worry about the size of your dataset.
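
(One caveat: sort typically writes its temporary files to $TMPDIR or /tmp, which may be too small for a huge dataset. With GNU sort, at least, you can point it at a roomier scratch directory with -T, e.g.:

$ sort -u -T /path/to/big/tmp file1 > file1.sorted

where /path/to/big/tmp is whatever large scratch directory you have handy.)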

Cheers,
Tom