I'm trying to speed up a few regexes in a script that gets called a few dozen times a day. Each invocation loops through a ton of source code and builds a searchable index.
The problem is that a single run of this script now takes more than a day. There's some parallelization that can be done, but I'm hopeful there's also something to be gained within the script itself.
The script in question is MXR's "genxref": here
Here's a relevant NYTProf run (one of the dozens that get run daily, across different source repos): here. You can see some lines are getting hit a million times or more.
Here's a good example fragment:
    # Remove nested parentheses.
    while ($contents =~ s/\(([^\)]*)\(/\($1\05/g ||
           $contents =~ s/\05([^\(\)]*)\)/ $1 /g) {}
This is one problematic snippet, but hardly the only one... the script is littered with complicated regexes. Most of them are quick enough as-is, but some (like the above) have become a significant performance bottleneck as our code base has grown.
How might I improve upon this situation? Specific improvements and general ideas are both welcome... I know the basics from a theoretical perspective (don't capture if you don't have to, try to avoid backtracking, etc.), but not how to spot and fix problems in practice. I don't have enough real-world experience with this.
Thanks!
In reply to tight loop regex optimization by superawesome