YanDas has asked for the wisdom of the Perl Monks concerning the following question:

I am an inexperienced Perl programmer, so you might find many odd things in the following script. It is supposed to read text files piped in from STDIN and build a dictionary out of the words found in them. The script actually runs at a reasonable speed (only a bit slower than "tr -cs '\na-zA-Z' '\n'|tr A-Z a-z|sort -u|egrep -v '^.?$|^$|(\w)\1\1\1\1'") and with minimal memory requirements (~7 MB) for most input texts.

The problem is that when the input is an actual dictionary (a sorted word list, one word per line), the performance is truly awful: the program consumes huge amounts of RAM (maybe more than 100 MB for a 10 MB input file) and runs at about 1/10th of its usual speed, without actually having much work to do. I have tracked the penalty down to the addition of new elements to the hash, which causes no problem at all on normal text files. I have tried several tricks, but without success. Could this be a problem with Perl's garbage collector, or just poor programming on my side?
while ($line = <STDIN>) {
    $line = lc $line;
    $line =~ s/[^a-z ]+/ /g;    # keep only letters and spaces
    foreach $word (split(/\s+/, $line)) {
        # skip words with a letter repeated 5+ times, and empty/1-letter words
        if ($word !~ m/(\w)\1\1\1\1/ && $word !~ m/^.?$/) {
            $hash{$word}++;
        }
    }
}
foreach $wo (sort (keys %hash)) {
    print "$wo\n";
}
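One thing that might be worth trying (a sketch, not a diagnosis): Perl lets you pre-size a hash by assigning to `keys %hash`, which allocates the buckets up front instead of repeatedly growing and re-hashing the table as keys are added. The rewrite below is the same logic as above with `use strict`/`use warnings` and a pre-sized hash; the figure of 200,000 buckets is an assumption about the vocabulary size, not something from the original script:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my %hash;
# Pre-allocate buckets for the expected number of distinct words
# (Perl rounds this up to the next power of two). 200_000 is a guess.
keys(%hash) = 200_000;

while (my $line = <STDIN>) {
    $line = lc $line;
    $line =~ s/[^a-z ]+/ /g;    # keep only letters and spaces
    for my $word (split /\s+/, $line) {
        # same filters as before: repeated letters, empty/1-letter words
        next if $word =~ /(\w)\1\1\1\1/ || $word =~ /^.?$/;
        $hash{$word}++;
    }
}
print "$_\n" for sort keys %hash;
```

If the slowdown really comes from hash growth, pre-sizing should flatten it; if it comes from something else (e.g. pathological bucket collisions on sorted input in this Perl version), it will not help and that would itself be informative.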