"Obviously this approach scales horribly. How can I make it fast?"
First approach: use perl 5.10 or newer; it has a very good optimization for many alternations of literal strings. You might want to increase ${^RE_TRIE_MAXBUF} so that all your keys fit into the same trie.
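A minimal sketch of what that could look like (the %words data and the buffer multiplier are made up for illustration; 65536 bytes is the documented default of ${^RE_TRIE_MAXBUF}):

    use strict;
    use warnings;
    use 5.010;

    # Give the trie optimizer more room before it falls back to a plain
    # alternation; 65536 is the default, the factor of 16 is arbitrary.
    ${^RE_TRIE_MAXBUF} = 65536 * 16;

    # Hypothetical key => value data standing in for the real hash.
    my %words = ('foo' => 1, 'foo bar' => 2, 'baz' => 3);

    # Literal keys, longest first, quoted so regex metacharacters stay literal.
    my $re = join '|', map quotemeta, sort { length $b <=> length $a } keys %words;
    $re = qr{($re)}i;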
There's also Regexp::Assemble, which is said to optimize the matching quite a bit, but I haven't tried it myself yet.
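An untested sketch of how Regexp::Assemble might be used here (same hypothetical %words as above; check the module's documentation for the exact interface):

    use strict;
    use warnings;
    use 5.010;
    use Regexp::Assemble;

    my %words = ('foo' => 1, 'foo bar' => 2, 'baz' => 3);   # hypothetical data

    # Feed every key in as a literal pattern; the module merges common
    # prefixes and suffixes into one optimized regex.
    my $ra = Regexp::Assemble->new(flags => 'i');
    $ra->add(quotemeta $_) for keys %words;
    my $re = $ra->re;

    while (<>) {
        chomp;
        say "'$_' matches" if /$re/;
    }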
Another obvious improvement is to move the regex generation and compilation out of the loop.
Update: I've misread the original code. The key to speed is to assemble all keys into one regex (outside the loop) and match that. You can use named captures or (?{...}) code blocks to identify which key actually matched.
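For example, a sketch with named captures (the data is made up; the group names are synthetic because hash keys may contain characters that aren't legal in group names):

    use strict;
    use warnings;
    use 5.010;

    my %words = ('foo' => 1, 'foo bar' => 2, 'baz' => 3);   # hypothetical data

    # Wrap each key in its own named group; %+ then tells us which one matched.
    my @keys = sort { length $b <=> length $a } keys %words;
    my %name_to_key;
    my $re = join '|', map {
        my $name = "k$_";
        $name_to_key{$name} = $keys[$_];
        "(?<$name>\Q$keys[$_]\E)";
    } 0 .. $#keys;
    $re = qr{$re}i;

    while (<>) {
        chomp;
        if (/$re/) {
            my ($name) = keys %+;              # only the matching group is set
            my $key    = $name_to_key{$name};
            say "found key '$key' in '$_', $words{$key}";
        }
    }

Looking the key up via the group name also sidesteps the subtle point that under /i the captured text can differ in case from the hash key itself.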
2nd update: To make my point a bit clearer, I've written a small script which generates 5k random search phrases.
    use strict;
    use warnings;
    use 5.010;
    use List::Util qw(max);

    my @letters = ('a' .. 'z', (' ') x 5);
    my %words = map {
        join('', map { $letters[int rand(@letters)] } 1 .. max(5, rand(50))) => $_
    } 1 .. 5_000;

    my @keys_sorted_by_length_desc = sort { length $b <=> length $a } keys %words;
    my $re = join '|', @keys_sorted_by_length_desc;
    $re = qr{($re)}i;

    while (<>) {
        chomp;
        if ($_ =~ $re) {
            say "found match $1 in '$_', $words{$1}";
        }
    }
I've applied that script to roughly 500k lines from files in /usr/share/dict/. It takes about 0.6s to generate the random phrases and the regex, and 1.5s to process the 500k lines. I don't know if that's fast enough for you, but I wouldn't describe it as "scales horribly" anymore :-)