in reply to parsing a very large array with regexps

Have you tried using the regexp alternation operator "|" to do the search with a single regexp?
@biff = grep(/foo|bar|baz/, @db);
This may help somewhat by reducing the number of times the regexp engine needs to be started up.
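As a rough sketch of the difference (the @db contents and the foo/bar/baz patterns here are made up for illustration), compare one grep per pattern against a single alternation pass:

```perl
use strict;
use warnings;

# Stand-in for the large array; in practice @db would hold many lines.
my @db = qw(foobar bazaar quux barfly nothing);

# One pass per pattern: the regexp engine is started up once per pattern.
my %seen;
for my $pat (qw(foo bar baz)) {
    $seen{$_} = 1 for grep { /$pat/ } @db;
}
my @multi = grep { $seen{$_} } @db;

# Single pass with alternation: the engine is started up only once.
my @single = grep { /foo|bar|baz/ } @db;

print "@single\n";    # same matches either way, in original order
```

Both approaches select the same elements; the alternation version simply walks @db once instead of once per pattern.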

Re^2: parsing a very large array with regexps
by pc2 (Beadle) on Aug 19, 2007 at 17:39 UTC
    Salutations. Actually, we have already tried searching with a single regexp containing several patterns separated by the | operator, but it still takes too long.

      I was going to suggest Regexp::Assemble as a way of producing an optimised regexp, but if even a single regexp takes too much time then that approach is not worth pursuing.

      If only a small population of regexp sets is in use, then you should investigate caching (precomputing) the result sets that the different sets of regexps produce when applied to the source list. Then, when you know that regexps A, B, D and G are called for, go and fetch the results that correspond to those regexps.
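      A minimal sketch of that precomputation idea (the %regexp_for labels, patterns and @db contents are invented here): compute each regexp's hit list once up front, then answer later requests by merging the stored lists instead of rescanning.

```perl
use strict;
use warnings;

my @db = qw(alpha beta gamma delta epsilon);

# The small, known population of regexps, keyed by a label.
my %regexp_for = (
    A => qr/a$/,     # ends in 'a'
    B => qr/^d/,     # starts with 'd'
    D => qr/lt/,     # contains 'lt'
);

# Precompute once: label => array ref of matching elements.
my %results;
while (my ($label, $re) = each %regexp_for) {
    $results{$label} = [ grep { /$re/ } @db ];
}

# Later, when "regexps A and D are called for", merge the stored lists.
my %hit;
$hit{$_} = 1 for map { @{ $results{$_} } } qw(A D);
my @wanted = grep { $hit{$_} } @db;
print "@wanted\n";
```

The final grep over @db is only there to restore the original ordering of the merged results.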

      If the regexps are truly arbitrary, then you have little choice but to pay the full cost each time (possibly caching the result in case it is called for again later on).
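      For the truly arbitrary case, that cache-on-demand idea is only a few lines (a sketch; the %cache and match_all names are made up here, and //= needs Perl 5.10 or later):

```perl
use strict;
use warnings;

my @db = qw(red green blue grey);

my %cache;    # pattern string => cached array ref of matches
sub match_all {
    my ($pat) = @_;
    # Pay the full grep cost only the first time a pattern is seen.
    $cache{$pat} //= [ grep { /$pat/ } @db ];
    return @{ $cache{$pat} };
}

my @first  = match_all('^gr');    # full scan of @db
my @second = match_all('^gr');    # served from the cache
print "@first\n";
```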

      • another intruder with the mooring in the heart of the Perl

      Oh. I think I see the problem. How many patterns are you searching for, what do the patterns actually look like, and how long is "too long"? With a 2 GHz Pentium IV, I can search a 2.2 MB file with 274K lines for three five-character patterns in about 1.35 seconds, and thirteen in 2.00 seconds (each additional pattern adds around 0.065 seconds or so.)
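      Measurements like those can be reproduced with the core Benchmark module (a sketch; the data here is generated in-memory rather than read from a 274K-line file, so the absolute times will differ):

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

# Generate a stand-in for the large array of lines.
my @db = map { sprintf 'line %06d abcde', $_ } 1 .. 50_000;

timethese(5, {
    simple_pats  => sub { my @hit = grep { /abcde|bdfhj|acegi/ } @db },
    complex_pats => sub { my @hit = grep { /a[bc]+d|[^efg](h|i)|klm*n+/ } @db },
});
```

Swapping in your own patterns and data lets you see directly how much each extra alternative, or a more complex pattern, costs on your hardware.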

      Those numbers are for simple patterns like /abcde|bdfhj|acegi/. If your patterns are more like

      /a[bc]+d|[^efg](h|i)|klm*n+/
      each pattern might take on the order of ten times as long.

      If you have any pathological patterns with nested * or + quantifiers, things can get very slow indeed.