Independently I have arrived at a similar solution.
    #!/usr/bin/env perl
    use strict;
    use warnings;

    my %excluded = map { $_ => 1 } qw(
        a about although also an and another are as at be been before
        between but by can do during for from has how however in into
        is it many may more most etc
    );

    my %count;
    {
        local $/ = "";    # paragraph mode
        while (<>) {
            tr {A-Z':@~,.()?*%/[]="-}{a-z}d;
            foreach (split) {
                $count{$_}++ unless $excluded{$_};
            }
        }
    }

    foreach my $word (sort { $count{$a} <=> $count{$b} or $a cmp $b } keys %count) {
        print "$count{$word} $word\n";
    }
I've leveraged the requirement of lowercasing only the ASCII letters by incorporating it into the tr///, and I've gone for paragraph mode instead of a single slurp, just in case :-)
Both solutions run in similar times and are about 100x faster than the original code:
    $ time ./11116620.pl < Frankenstein.txt > orig.out

    real    0m9.381s
    user    0m9.366s
    sys     0m0.007s

    $ time ./wordcount.pl < Frankenstein.txt > hippo.out

    real    0m0.089s
    user    0m0.081s
    sys     0m0.008s

    $ time ./tybalt.pl < Frankenstein.txt > tybalt.out

    real    0m0.090s
    user    0m0.084s
    sys     0m0.005s
There are some minor differences between all three outputs, but without a tighter spec these aren't overly concerning.
In reply to Re^2: Counting and Filtering Words From File
by hippo
in thread Counting and Filtering Words From File
by maxamillionk