I have a directory containing 50,000 text files for a total of 1.5GB of data.
I want to quickly grep it for a Perl regular expression and get a list of occurrences by file name and line number.
I could just scan every file on each query (or indeed just use unix grep), but speed is important and disk space is cheap.
My vision is to somehow run a process (overnight) that builds a large index mapping regexp prefixes to the files they could match. (Possibly even 30GB+ if that would be useful.)
The index could cache the matching-file sets for all possible starting prefixes of every regexp, up to some depth limited by the index size. This would allow the search algorithm to skip most of the work.
I realize search is a hard problem, but has anyone got any ideas about:
Architecture? Would I have to reimplement or modify Perl's regexp parser to do this right, or is there some way to tokenize a regexp and extract a usable prefix?
CPAN modules that might be useful?
Caching algorithm for the index?
Alternative solutions (open source only)? Similar projects?
General feedback about the idea?
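On the tokenizing question above: for many patterns you don't need Perl's full regexp parser, only the longest leading run of literal characters. A naive extractor might look like this (a sketch only; real regexp syntax has more cases such as \Q...\E, alternation, and anchors):

```shell
# Take the leading run of non-metacharacters as the literal prefix, then
# drop the last character if a quantifier follows it (a trailing '?', '*',
# or '{0,n}' makes that character optional). '+' is handled conservatively:
# "ab+c" really does guarantee the prefix "ab", but falling back to "a"
# is still correct, just less selective.
perl -le '($re) = @ARGV;
          ($p) = $re =~ /^([^\\\^\$\.\|\?\*\+\(\)\[\]\{\}]*)/;
          $p = substr($p, 0, -1) if length($p) && $re =~ /^\Q$p\E[?*+{]/;
          print $p' 'foo\d+'
# prints: foo
```

Patterns like `.*bar` yield an empty prefix, which would have to fall back to the brute-force scan; how often that happens in practice decides how much the index actually buys.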