in reply to Re: Bulk Regex?
in thread Bulk Regex?
Interesting idea... I wonder if dynamically resorting as you go would help?
At the very least, you could do this once, and then feed the results back into the top of the script. If the frequencies are about constant (which seems plausible for department populations in a large firm), sorting once up front should make a lot more sense than dynamically resorting each time...

my @codes = qw/ CODE1 CODE2 CODE3 /;

# compile the patterns first, then key the hit counts by the compiled
# regex, so the keys match the ones incremented in the loop below
my @regex     = map { qr/$_/i } @codes;
my %hitCounts = map { $_ => 0 } @regex;

# tune this parameter for optimal performance, balancing better ordering
# of regexen with sort costs...
my $resortFreq = 1000;
my $iterCount  = 0;

while (my $inputline = <FILE>) {
    my $found = 0;
    foreach my $regex (@regex) {
        if ($inputline =~ /$regex/) {
            $hitCounts{$regex}++;
            $found = 1;
            last;
        }
    }

    # re-sort every $resortFreq lines; the frequency is a parameter
    # that probably should be tuned
    if (++$iterCount % $resortFreq == 0) {
        @regex = sort { $hitCounts{$b} <=> $hitCounts{$a} } @regex;
    }

    next unless $found;
    # logic goes here
}
Another alternative would be to have an END block write out a file of regexen, which the script could read back in. Then each run would be as good as it could be, based on the results of the previous run...

END {
    print "Regexen in sorted order:\n\t";
    print join "\n\t", sort { $hitCounts{$b} <=> $hitCounts{$a} } @regex;
    print "\n";
}
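The read-back-in side might look something like this: a minimal sketch, assuming the END block is changed to write one code per line to a file ("regex_order.txt" is just a placeholder name, not anything from the thread).

# Sketch of reading a previously saved ordering back in at startup.
# Assumes the END block writes one code per line to "regex_order.txt"
# (a hypothetical filename).
my @codes = qw/ CODE1 CODE2 CODE3 /;    # default order for the first run

if (open my $order_fh, '<', 'regex_order.txt') {
    chomp(my @saved = <$order_fh>);
    close $order_fh;

    # keep only the saved codes we still recognize, then append any
    # codes that were added since the last run
    my %known   = map { $_ => 1 } @codes;
    my @ordered = grep { delete $known{$_} } @saved;
    @codes      = (@ordered, grep { $known{$_} } @codes);
}

my @regex = map { qr/$_/i } @codes;

That way codes added since the last run still get matched; they just start at the back of the line until the next re-sort.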