in reply to finding a set of relevant keys

First, for making it more efficient: stem_in_place is a very good fit for this. However, because it will modify all your words, you should have a way to keep track of the original words. You can do something like:

    my @words_stemmed = @words;
    $stemmer->stem_in_place(\@words_stemmed);

    # @all will contain hashrefs like { original => 'books', stemmed => 'book' }
    my @all = map {
        +{ original => $words[$_], stemmed => $words_stemmed[$_] };
    } 0 .. $#words;

I also think you should use ->stem_in_place with some kind of global cache (though that would mean adding it to the .xs if it's not already there).
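Until something like that lands in the .xs, a pure-Perl wrapper can get you most of the benefit. A minimal sketch, assuming your $stemmer exposes stem_in_place on an array ref as described above; stem_cached and %stem_cache are names I made up here:

    use strict;
    use warnings;

    my %stem_cache;   # word => stem, shared across calls

    sub stem_cached {
        my ($stemmer, @words) = @_;

        # stem each distinct, not-yet-cached word exactly once
        my %seen;
        my @todo = grep { !exists $stem_cache{$_} && !$seen{$_}++ } @words;
        if (@todo) {
            my @stemmed = @todo;
            $stemmer->stem_in_place(\@stemmed);
            @stem_cache{@todo} = @stemmed;
        }

        return map { $stem_cache{$_} } @words;
    }

Since word frequencies in natural-language text are heavily skewed, most lookups should hit the cache, so the (XS) stemmer only ever sees each distinct word once.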

For performance-related issues you can use Gearman or POE to distribute the work across many machines. It's no surprise that just "counting words" is time-consuming; a lot of things in NLP are.
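For the Gearman route, here is a minimal sketch of a worker that counts words in one chunk of text. It assumes the Gearman::Worker module and a job server on localhost; the function name count_words is made up for the example. The client side would split the corpus into chunks, submit one count_words job per chunk, then thaw and merge the partial counts:

    use strict;
    use warnings;
    use Gearman::Worker;
    use Storable qw(freeze);

    my $worker = Gearman::Worker->new;
    $worker->job_servers('127.0.0.1:4730');

    # each job's argument is a chunk of raw text; return its word counts
    $worker->register_function(count_words => sub {
        my $job = shift;
        my %count;
        $count{$_}++ for split ' ', $job->arg;
        return freeze(\%count);   # client thaws and merges the hashes
    });

    $worker->work while 1;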

PetaMem here on PerlMonks also does NLP, so maybe he can give some details on how this can be improved.

UPDATE: took a closer look at the OP and read more carefully.