in reply to Matching data between huge files

Which works really fast, but then I don't want to load a million lines into memory at once.

Why not?

As the "keys" for matching are numeric, you can save some space by using a bitstring for your lookup:

use strict;
use warnings;
use autodie qw(open close);

open my $fip, '<', 'file-1';
open my $fop, '<', 'file-2';

# Build the lookup: set bit N of the bitstring for each key N in file-2.
my $lookup = '';
while ( my $line = <$fop> ) {
    chomp $line;
    vec( $lookup, $line, 1 ) = 1;
}
close $fop;

# Print each line of file-1 whose leading numeric key has its bit set.
while ( my $line = <$fip> ) {
    my @token = split /-/, $line, 2;
    if ( vec( $lookup, $token[0], 1 ) ) {
        print $line;
    }
}
close $fip;

For a range up to 1 million, that will use less than 1/4 MB for the lookup, rather than close to 100 MB for a hash, and it should be at least as fast.
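
To see where that figure comes from: a bitstring uses one bit per possible key, so a million keys need 1,000,000 / 8 = 125,000 bytes. A quick sanity check (a minimal sketch; the one-million range is taken from the figures above):

my $lookup = '';
vec( $lookup, 999_999, 1 ) = 1;     # set the highest bit in the range
print length( $lookup ), "\n";      # 125000 bytes, i.e. ~122 KB < 1/4 MB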

Update: it is actually ~10x faster than the hash lookup, though you probably won't notice the difference, as you will be limited by the time taken to read the second file from disk. But in any case, it will be two or three orders of magnitude faster than any DB solution.
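
If you want to check the relative lookup speeds for yourself, a minimal Benchmark sketch along these lines will do (the key range and the @probe set are illustrative assumptions, not taken from the OP's data):

use strict;
use warnings;
use Benchmark qw(cmpthese);

# Populate a hash and a bitstring with the same million keys.
my %hash;
my $bits = '';
for my $key ( 0 .. 999_999 ) {
    $hash{$key} = 1;
    vec( $bits, $key, 1 ) = 1;
}

# Probe both structures with the same random keys.
my @probe = map { int rand 1_000_000 } 1 .. 1_000;

cmpthese( -3, {
    hash => sub { my $hits = 0; $hits += $hash{$_} // 0      for @probe; $hits },
    vec  => sub { my $hits = 0; $hits += vec( $bits, $_, 1 ) for @probe; $hits },
});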
