While some optimizations are generally applicable, the greatest speed-ups usually come from taking advantage of the specifics of an individual use-case. So it would be useful to have a more representative sample of the breadth of patterns you're likely to use.
A likely aspect of any solution is, as shown by AnomalousMonk's code, to match many patterns at once and then use a hash lookup to find the string to substitute for whatever matched.
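As a minimal sketch of that idea (the word list and replacements below are invented for illustration, not taken from your data):

#!/usr/bin/perl
use strict;
use warnings;

# Invented substitution table, just to show the shape of the technique.
my %subst = (
    'colour'    => 'color',
    'centre'    => 'center',
    'programme' => 'program',
);

# One alternation built from the keys, longest first so that longer literals
# are tried before shorter ones that could be prefixes of them.
my $alt = join '|', map { quotemeta } sort { length $b <=> length $a } keys %subst;
my $re  = qr/\b($alt)\b/;

my $text = 'The programme at the centre uses colour.';

# A single pass over the string: whatever matched is looked up in the hash.
$text =~ s/$re/$subst{$1}/ge;

print "$text\n";   # prints: The program at the center uses color.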
It seems likely that keeping the set of substitutions easy to modify will involve a separate pre-processing step to combine some or all of them into larger regexps - either internally, as in AnomalousMonk's code, or as a separate program that writes out Perl code for the combination.
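A sketch of the "separate program" variant might look like the following. The input format - "literal => replacement" lines after __DATA__ - is made up for this example; the real list would come from wherever the substitutions are maintained.

#!/usr/bin/perl
use strict;
use warnings;

# Read the editable list of substitutions (hypothetical format).
my %subst;
while (my $line = <DATA>) {
    chomp $line;
    next unless length $line;
    my ($from, $to) = split /\s*=>\s*/, $line, 2;
    $subst{$from} = $to;
}

# Longest literals first so they are tried before any shorter prefix of them.
my $alt = join '|', map { quotemeta } sort { length $b <=> length $a } keys %subst;

# Print Perl source that a build step could save to a file and require().
# (Assumes the replacements contain no single quotes.)
print "my %subst = (\n";
print "    '$_' => '$subst{$_}',\n" for sort keys %subst;
print ");\n";
print "my \$re = qr/\\b($alt)\\b/;\n";
print "sub apply_subst { my \$text = shift; \$text =~ s/\$re/\$subst{\$1}/ge; return \$text }\n";
print "1;\n";

__DATA__
colour => color
centre => center
programme => program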
"Combining the regexes into one large sequence (s/^[0-9].*\s//m|s/\S*?talk\S*\s/ talk /gi...) didn't help either ..."
This is just doing multiple, separate substitutions: the | only combines the return values of the s/// operators, so each of them still makes its own full pass over the string. The win will come from combining them into a single pattern (or, failing that, into a smaller number of patterns).
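For example, the two substitutions quoted above could be folded into one pattern by capturing which branch matched and choosing the replacement with /e. This is only a sketch on made-up sample text: a single left-to-right pass is not guaranteed to produce exactly the same output as applying the two substitutions one after the other, so it would need checking against real data.

#!/usr/bin/perl
use strict;
use warnings;

# Invented sample input, just to exercise both branches.
my $text = "12345 some header line\nPeople TALKED about crosstalk here\n";

$text =~ s{ ^[0-9].*\s          # was s/^[0-9].*\s//m
          | (\S*?talk\S*\s)     # was s/\S*?talk\S*\s/ talk /gi
          }{ defined $1 ? ' talk ' : '' }gmixe;

# Header line removed, the *talk* words normalised to ' talk '.
print $text;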