The first while loop is working correctly: it extracts the words from the first text file, removes non-alphanumeric characters, converts the words to lower case, and stores them as the keys in a hash. (Here, as often in Perl, the value stored with the key is irrelevant.) So far, so good. (Note: the line $line =~ s/[[:punct:]]//g; isn’t needed, because all punctuation characters are removed in the subsequent substitution: $word =~ s/[^A-Za-z0-9]//g;.)
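For reference, that first-loop logic can be sketched along these lines (a sketch only; the filehandle $fh1, the sample data, and the hash name %words1 are placeholders, not your actual code):

```perl
use strict;
use warnings;

# Example input; in the real script $fh1 would be an opened text file.
open my $fh1, '<', \"Hello, world! hello Perl-5\n" or die $!;

# Build a hash whose keys are the normalized words of the first file.
my %words1;
while (my $line = <$fh1>) {
    for my $word (split ' ', $line) {
        $word =~ s/[^A-Za-z0-9]//g;    # strip non-alphanumeric characters
        next unless length $word;      # skip tokens that were all punctuation
        $words1{ lc $word } = 1;       # the key is the word; the value is irrelevant
    }
}
```

After this runs on the sample line, %words1 has the keys hello, perl5, and world.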
The second while loop is more complicated, and that is where things fall apart. I don't think I follow the intended logic here; rather than try to fix it, it will be simpler, and clearer, to re-think the algorithm.
The easiest approach is to simply repeat the logic of the first while loop and create a second hash containing the words in the second text file. (It will help if you rename the first hash %results to something like %words1, then the second hash can be named %words2.) You now have only to find which keys are common to both hashes, and that will give you the desired result:
my $counter = 0;

for my $key (sort keys %words1)
{
    if (exists $words2{$key})
    {
        ++$counter;
        print $key, "\n\n";
    }
}

print "Found $counter words in common\n";
You may also benefit from studying the FAQs in Data: Hashes (Associative Arrays).
Hope that helps,
Athanasius <°(((>< contra mundum
In reply to Re: Extracting common words by Athanasius
in thread Extracting common words by Anonymous Monk