You might try something like the following.
    use strict;
    use warnings;
    use Fcntl;      # for O_RDWR and O_CREAT
    use DB_File;

    my $hashfile = "hash.$$";
    tie my %hash1, "DB_File", $hashfile, O_RDWR|O_CREAT, 0666, $DB_HASH
        or die "cannot open file $hashfile: $!";

    open(my $fh1, "<", "file1.txt") or die "file1.txt: $!";
    while (my $line = <$fh1>) {         # read line by line, don't slurp
        chomp $line;
        my @parts = split(/\s+/, $line);
        $hash1{"$parts[1]#$parts[2]"} = $parts[4];
    }
    close($fh1);

    open(my $fh2, "<", "file2.txt") or die "file2.txt: $!";
    while (my $line = <$fh2>) {
        my @parts = split(/[\s>]+/, $line);
        my $value = $hash1{"$parts[0]#$parts[1]"};
        if (defined($value) and grep { $_ eq $parts[2] } split(/\//, $value)) {
            print $line;
        }
    }
    close($fh2);

    untie %hash1;
    unlink($hashfile);
Using a tied hash allows you to process data sets that would exceed available memory with an in-memory hash. You may not need this, but a 500MB file will produce quite a large hash. Performance will be better if the data fits in memory and you can use a plain (untied) hash.
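For comparison, here is a self-contained in-memory sketch of the same build-then-lookup approach, with the tie/untie dropped. The sample records below are made up purely for illustration; with your real data they would come from file1.txt and file2.txt:

```perl
use strict;
use warnings;

# Hypothetical records standing in for lines of file1.txt and file2.txt.
my @file1_lines = (
    "rec1 chr1 100 x A/TTA/AT",
    "rec2 chr2 200 y G",
);
my @file2_lines = (
    "chr1 100>AT extra",
    "chr2 300>G extra",
);

# Build the lookup hash from the first file's fields 2, 3 and 5.
my %hash1;
foreach my $line (@file1_lines) {
    my @parts = split(/\s+/, $line);
    $hash1{"$parts[1]#$parts[2]"} = $parts[4];
}

# Scan the second file and keep lines whose key and allele both match.
my @matches;
foreach my $line (@file2_lines) {
    my @parts = split(/[\s>]+/, $line);
    my $value = $hash1{"$parts[0]#$parts[1]"};
    if (defined($value) and grep { $_ eq $parts[2] } split(/\//, $value)) {
        push @matches, $line;
    }
}
print scalar(@matches), " matching line(s)\n";   # prints "1 matching line(s)"
```

The only structural difference from the tied version is that %hash1 lives entirely in memory, so nothing needs to be opened, untied, or unlinked.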
Only one hash is built: there is no benefit in building a hash from the second file, since you never perform lookups in it.
Some of your matching criteria weren't clear to me: your description and code seemed to differ, and I wasn't sure what "single character" means for a value like "A/TTA/AT". You may need to adjust the matching criteria in the second loop.
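To show what the second loop's test actually does with a value like "A/TTA/AT": splitting on "/" and then using grep matches whole alternatives, not single characters. A minimal sketch:

```perl
use strict;
use warnings;

my $value = "A/TTA/AT";
my @alternatives = split(/\//, $value);   # ("A", "TTA", "AT")

# grep in scalar context returns the number of matching elements.
my $matches_AT = grep { $_ eq "AT" } @alternatives;   # 1 - "AT" is an alternative
my $matches_T  = grep { $_ eq "T" }  @alternatives;   # 0 - "T" is only a substring

print "AT: $matches_AT, T: $matches_T\n";   # prints "AT: 1, T: 0"
```

If you instead want per-character matching, the test would need index() or a regex rather than string equality.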
In reply to Re: hash to hash comparison on large files
by ig
in thread hash to hash comparison on large files
by patric