in reply to Auto-Expansion of Grep Function

Optimization is of course a science: it should be done by locating hotspots with a profiler (Devel::NYTProf), attacking the places where the code is spending the most time, and using a module like Benchmark to test and compare alternatives. But here are some rough rules of thumb / unverified gut feelings based on experience (every mention of "slow" below is therefore subjective).
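
For example, once you have found a hotspot, Benchmark's cmpthese can compare the rates of two alternatives. A minimal sketch (the sample line and the two candidate subs are illustrative, not taken from this thread):

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my $line  = 'foo,SEVERE,bar,192.168.200.1,baz';
    my $regex = qr/192\.168\.200\.|10\.10\.200/;

    # run each sub for at least 3 CPU seconds and print a comparison table
    cmpthese( -3, {
        regex => sub { my $hit = $line =~ $regex },
        index => sub { my $hit = index( $line, '192.168.200.' ) >= 0
                           || index( $line, '10.10.200' ) >= 0 },
    } );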

So personally my starting point would be something like this, based on Building Regex Alternations Dynamically:

    my @strings_to_match = ('192.168.200.', '10.10.200');
    # build a single alternation, longest strings first so that longer
    # alternatives win over their shorter prefixes
    my ($regex) = map { qr/$_/ }
                  join '|',
                  map { quotemeta }
                  sort { length $b <=> length $a } @strings_to_match;
    my @filtered_result;
    while (<>) {
        push @filtered_result, $_ if m{ ,SEVERE, .* $regex }x;
    }

Note that I am only recommending this solution because you said "it didn't really matter to me where the IP address I was filtering was located, (just that it was there)". Otherwise, I would recommend Text::CSV_XS for loading CSV files.
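
In that case, a minimal sketch of the Text::CSV_XS approach, assuming (purely for illustration) that the severity is in column 2 and the IP address in column 5:

    use strict;
    use warnings;
    use Text::CSV_XS;

    # same alternation regex as in the example above
    my @strings_to_match = ('192.168.200.', '10.10.200');
    my ($regex) = map { qr/$_/ }
                  join '|',
                  map { quotemeta }
                  sort { length $b <=> length $a } @strings_to_match;

    my $csv = Text::CSV_XS->new( { binary => 1, auto_diag => 1 } );
    my @filtered_result;
    while ( my $row = $csv->getline( \*STDIN ) ) {
        # hypothetical column layout: severity in [2], IP address in [5]
        push @filtered_result, $row
            if $row->[2] eq 'SEVERE' && $row->[5] =~ $regex;
    }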

"I was hoping to find a solution, however, where the grep function could be expanded (nested?) an arbitrary number of times based on the different match strings that we get."

No, sorry, it doesn't work that way*. However, you can make the matching logic for a single loop (while/foreach/grep) as complex as you need, as I showed above. I would not recommend building a string of Perl code and eval-ing it either, because it is much too easy to get burned (and in some cases to expose security holes).
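
To illustrate, here is a sketch of a single grep whose block loops over arbitrarily many match strings, with no nesting and no eval (the variable names are mine):

    use strict;
    use warnings;

    my @strings_to_match = ('192.168.200.', '10.10.200');
    my @lines = <>;

    # keep lines containing ANY of the strings
    my @any = grep {
        my $line = $_;
        grep { index( $line, $_ ) >= 0 } @strings_to_match;
    } @lines;

    # the same shape handles ALL of the strings by comparing match counts
    my @all = grep {
        my $line = $_;
        @strings_to_match == grep { index( $line, $_ ) >= 0 } @strings_to_match;
    } @lines;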

* Update: The solution would be "lazy lists", which can be implemented with iterators. This is probably much too advanced for now (and probably wouldn't give you a performance gain either), but Dominus's book Higher-Order Perl is a wonderful read on that topic.
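
If you are curious, a minimal sketch of the iterator idea in the style of that book (make_filter_iterator is my own name, not something from the book): a closure that reads the file lazily and hands back one matching line per call.

    use strict;
    use warnings;

    # returns a closure; each call reads just far enough to find the next match
    sub make_filter_iterator {
        my ( $fh, $regex ) = @_;
        return sub {
            while ( my $line = <$fh> ) {
                return $line if $line =~ $regex;
            }
            return;    # input exhausted
        };
    }

    my $next_match = make_filter_iterator( \*STDIN, qr/,SEVERE,/ );
    while ( defined( my $line = $next_match->() ) ) {
        print $line;
    }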