in reply to Searching through text files

How efficient does it need to be? What system will this script need to run on? You might be better off not using Perl for this. For example, on Unix-like systems, the grep command might be just the ticket. And, it'll be more efficient (in most measures) than any Perl solution can be.
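For a concrete sense of what that looks like, a single grep invocation can search a whole tree and report only the names of the files that match. This is a generic sketch (the scratch directory and the string "needle" are made-up placeholders, and -r assumes GNU grep, since POSIX grep has no recursive option):

```shell
# Demo setup: a scratch directory with one matching and one non-matching file
dir=$(mktemp -d)
printf 'found a needle here\n' > "$dir/a.txt"
printf 'nothing to see\n'      > "$dir/b.txt"

# -F: fixed string (no regex), -r: recurse, -l: print matching file names only
matches=$(grep -rlF -- 'needle' "$dir")
echo "$matches"
```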

------
We are the carpenters and bricklayers of the Information Age.

Then there are Damian modules.... *sigh* ... that's not about being less-lazy -- that's about being on some really good drugs -- you know, there is no spoon. - flyingmoose

Replies are listed 'Best First'.
Re: Re: Searching through text files
by pzbagel (Chaplain) on Mar 23, 2004 at 18:28 UTC

    I have to disagree. Depending on how complex your regex is and what you want to do once you find the string in the file, grep is usually NOT more efficient than Perl. One major reason is that grep uses a text-directed regex engine. It searches for the "best match" (the leftmost-longest match POSIX requires), so it has to keep working through the string even after it finds a match, in case a longer one exists. Perl uses a regex-directed engine that returns the "left-most match": the instant it finds a match, the Perl regex engine returns that match and moves on.

    I've seen this demonstrated when dealing with very large log files (>2 GB). Perl was able to do in a few minutes what took grep 10+ minutes to accomplish.

    Check out Jeffrey Friedl's book Mastering Regular Expressions. It's an absolutely fascinating read on regexes and regex engines.

    Later
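The "left-most match" behaviour described above is easy to see from Perl itself: with alternation, the engine reports the first alternative that succeeds at the earliest position, even when a longer overall match exists. A small illustrative snippet (not from the thread):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Perl's regex-directed engine stops at the first alternative that
# succeeds, so the order of alternatives matters:
my ($short) = 'oneself' =~ /(one|oneself)/;   # matches 'one'
my ($long)  = 'oneself' =~ /(oneself|one)/;   # matches 'oneself'

# A POSIX leftmost-longest (text-directed) engine, as used by
# traditional grep, would report 'oneself' in both cases.
print "$short $long\n";   # prints "one oneself"
```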

      While this is definitely true for some regexes, the Anonymous poster says in his question:

      I have to write a script to search for a string, it has to search for a string in about a thousand different text files.

      For fixed strings, grep -F is probably fast enough.

      If, however, the needle string contains newlines or NUL characters, that may be difficult to achieve with grep, so Perl may be the better choice. Also, if the file has very long lines (or no newlines at all), you can't make grep print only where the string is: it either prints the whole line, or the line number, or just gives you a truth value. In such cases, Perl (or some other program) may be better. And on Windows, where you may only have find installed and no real grep, you may have to use Perl anyway.
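In those awkward cases, slurping the file and using index avoids the regex engine and line-orientation entirely. A minimal sketch (the needle and haystack here are made-up placeholders standing in for a file's contents):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# index() treats the needle as raw bytes, so embedded newlines
# (or NULs) are fine, unlike line-oriented grep.
my $needle   = "line one\nline two";
my $haystack = "junk before\nline one\nline two\njunk after\n";
# In practice the haystack would be slurped from a file:
#   local $/; my $haystack = <$fh>;

my $pos = index $haystack, $needle;
print "found at byte offset $pos\n" if $pos >= 0;
```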

Re: Re: Searching through text files
by Anonymous Monk on Mar 23, 2004 at 17:03 UTC
    Hi dragonchild,

    thanks for your prompt response. I don't have an answer on how efficient it needs to be, I just want to know the quickest way to do the task. I guess all I was really looking for was the quickest technique for doing this task using perl.

    Thanks a lot
    Jonathan

      Below is a simple solution with little Perl code, relying on standard UNIX tools (find and grep). grep -F is used for fast (fixed-string instead of regexp) search. It searches all the files in $DIR and all its subdirectories and obtains the names of those matching $STRING. Note that Perl 5.8.0+ is required (for the safe list form of open). If you don't have it, you must do the shell escaping yourself.

      #!/usr/bin/perl
      # safe form of IPC open requires perl 5.8.0
      use v5.8.0;

      my $DIR    = '/some/directory';
      my $STRING = 'hidden!';

      open my $fh, '-|', 'find', $DIR, qw/-type f -exec grep -lF/, $STRING, qw/{} ;/
          or die $!;
      chomp(my @found = <$fh>);
      # @found now contains the list of files matching the string
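A pure-Perl equivalent, for systems without find and grep (e.g. Windows), might use the core File::Find module. This is a sketch, not from the original post; the temp directory and sample files are demo scaffolding standing in for '/some/directory':

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Temp qw(tempdir);

# Demo setup (stands in for '/some/directory' from the post above):
my $DIR    = tempdir(CLEANUP => 1);
my $STRING = 'hidden!';
open my $out, '>', "$DIR/match.txt" or die $!;
print $out "some hidden! text\n";
close $out;
open $out, '>', "$DIR/other.txt" or die $!;
print $out "nothing here\n";
close $out;

# Walk $DIR, slurp each regular file, and record those containing
# the fixed string (index() avoids the regex engine entirely).
my @found;
find(sub {
    return unless -f $_;
    open my $fh, '<', $_ or return;
    local $/;                      # slurp mode
    push @found, $File::Find::name if index(<$fh>, $STRING) >= 0;
}, $DIR);

print "$_\n" for @found;
```

Slurping whole files keeps the code short but costs memory on very large files; for the >2 GB logs mentioned above, a line-by-line (or block-by-block) read would be kinder.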