in reply to Recommendations for efficient data reduction/substitution application
Had a similar problem 10 years ago, i.e. analysing a weblog of more than 1 GB/day by matching 3500 regex patterns against each line. With the hardware of that time (2005), a run was predicted to need more than 24 hours for a single day's log.
So I created what I called "reverse matching", i.e. taking a substring of the log line and using it to look up the patterns. If the patterns have a fixed part that can be anchored, a substring of width $w can be extracted from the log line, and this substring serves as the key for a lookup in a hash of arrays of regex patterns. This pattern hash can be preprocessed once by extracting a substring of width $w at the same position of each regex pattern, provided there is a literal string at that position in the pattern.
The goal of the above algorithm is to reduce the complexity of O(n*p), where n is the number of lines and p the number of patterns. If a pattern contains a fixed string, that substring can be used to select (restrict, filter) the set of patterns to apply. In my case it was possible to reduce the number of patterns to try per line from 3500 to 120. Whether you can use a similar approach depends on the structure of your data and patterns.
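A minimal Perl sketch of this bucketing idea, assuming the fixed part sits at a known position and that the first $w characters of each pattern source are literal; the sample patterns, positions and variable names are made up for illustration, not my original code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $pos = 0;   # position of the anchored fixed part in both pattern and line
my $w   = 4;   # width $w of the extracted substring

# Pattern sources whose first $w characters (at $pos) are literal text.
my @pattern_sources = (
    'GET /index\.html \d+',
    'GET /images/.* 200',
    'POST /login .* 302',
);

# Preprocess once: key each compiled pattern by its literal substring,
# building a hash of arrays of regex patterns.
my %patterns_by_key;
for my $src (@pattern_sources) {
    my $key = substr($src, $pos, $w);          # e.g. "GET " or "POST"
    push @{ $patterns_by_key{$key} }, qr/^$src/;
}

# Matching phase: each line is only tried against the patterns in its bucket.
while (my $line = <STDIN>) {
    chomp $line;
    my $key        = substr($line, $pos, $w);
    my $candidates = $patterns_by_key{$key} or next;   # no bucket, no match
    for my $re (@$candidates) {
        if ($line =~ $re) {
            print "matched: $line\n";
            last;
        }
    }
}
```

Run it as e.g. `perl reverse_match.pl < access.log` (the script name is hypothetical). The point is that %patterns_by_key is built only once, while the per-line work shrinks to one substr, one hash lookup and the handful of patterns in the selected bucket.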
As others noted here, there may also be room for reduction at a higher level: substitution regexes in such numbers smell like unnecessary work, contrary to "Do only what you need." Maybe you can find some nice ideas in the excellent work of Tim Bunce: http://de.slideshare.net/Tim.Bunce/application-logging-in-the-21st-century-2014key