
tomazos has asked for the wisdom of the Perl Monks concerning the following question:

I have a directory containing 50,000 text files for a total of 1.5GB of data.

I want to grep it quickly for a Perl regular expression and get a list of occurrences by file name and line number.

I could just:

    use strict;
    use warnings;
    use File::Find;

    my $regexp = shift @ARGV;

    sub process_file {
        return unless -f $_;             # skip directories
        open my $fh, '<', $_ or return;  # find() chdirs, so open $_, not $File::Find::name
        my $count = 0;
        while (<$fh>) {
            $count++;
            print "$File::Find::name $count\n" if /$regexp/;
        }
        close $fh;
    }

    find(\&process_file, @ARGV);

(or indeed just use unix grep)
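For comparison, GNU grep already produces the recursive file/line-number report; its -P switch uses PCRE, the closest thing to Perl regexp syntax, though not every build includes it. The pattern and path here are placeholders:

    grep -rnP 'some_regexp' /path/to/textfiles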

...but speed is important and disk space is cheap.

My vision is to run a process overnight that builds a large index mapping the literal beginnings of regexps to the files that contain them. (Possibly even 30GB+ if it would be useful.)

The index would cache results for all possible starting prefixes of a regexp, up to some depth limited by the index size. This would allow the search algorithm to skip files that cannot possibly match.
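Here is a minimal sketch of what the overnight pass might look like, assuming a fixed-depth substring ("n-gram") index serialized with Storable. The depth of 3 and the file name index.stor are placeholders, and a real 1.5GB corpus would want a disk-backed store (DB_File, BerkeleyDB) rather than an in-memory hash:

    use strict;
    use warnings;
    use File::Find;
    use Storable qw(nstore);

    my $depth = 3;    # substring length; deeper means a bigger index
    my %index;        # substring => { filename => 1, ... }

    sub index_file {
        return unless -f $_;
        my $name = $File::Find::name;
        open my $fh, '<', $_ or return;
        while (my $line = <$fh>) {
            # record every $depth-length substring of the line
            for my $pos (0 .. length($line) - $depth) {
                $index{ substr($line, $pos, $depth) }{$name} = 1;
            }
        }
        close $fh;
    }

    find(\&index_file, @ARGV);
    nstore(\%index, 'index.stor');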

I realize search is a hard problem, but has anyone got any ideas about:

Architecture? Would I have to reimplement or modify Perl's regexp parser to do this right, or is there some way to tokenize a regexp and pull out its literal prefix? (A rough cut at this is sketched after this list.)

CPAN modules that might be useful?

Caching algorithm for the index?

Alternative solutions (open source only)? Similar projects?

General feedback about the idea?
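On the tokenizing question above, here is a rough cut rather than a real parser: peel literal characters off the front of the pattern until the first metacharacter, then use the index from the earlier sketch to narrow the candidate files before running the actual match. The helper literal_prefix and the file index.stor are assumptions carried over from that sketch:

    use strict;
    use warnings;
    use Storable qw(retrieve);

    # Naive: stops at the first metacharacter, so escapes like \d,
    # alternation and anchors all cut the prefix short.
    sub literal_prefix {
        my ($pat) = @_;
        my $prefix = '';
        for my $ch (split //, $pat) {
            last if $ch =~ /[\\^\$.|?*+()\[\]{}]/;
            $prefix .= $ch;
        }
        return $prefix;
    }

    my $regexp = shift @ARGV;
    my $prefix = literal_prefix($regexp);
    my $index  = retrieve('index.stor');   # built by the overnight pass

    # Use the first 3-char substring of the prefix to narrow the file
    # list; with no usable prefix we would have to scan everything.
    my @candidates = length($prefix) >= 3
        ? keys %{ $index->{ substr($prefix, 0, 3) } || {} }
        : ();

    for my $file (@candidates) {
        open my $fh, '<', $file or next;
        while (<$fh>) {
            print "$file $.\n" if /$regexp/;
        }
        close $fh;
    }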

-Andrew.