Ananda has asked for the wisdom of the Perl Monks concerning the following question:

Hello Monks, I am trying this

my @arr = `grep -l "<value>IsFrequentlyUsed</value>" *

to read and populate an array with the files containing the required information. This works fine.

The problem is that when this Perl file is invoked from another application, the "grep" throws up errors.

I am seeking a solution that populates @arr using Perl's grep function, or in any other way that avoids invoking the system "grep" command.

All your thoughts and guidance are appreciated. Thanks and regards,

Ananda

Replies are listed 'Best First'.
Re: Alternative to using system grep
by Corion (Patriarch) on Aug 18, 2004 at 06:47 UTC

    There is also the quite useful module File::Find::Rule, which makes grepping files easy:

    use strict;
    use File::Find::Rule;

    my @files = File::Find::Rule
        ->file()
        ->grep(qr!<value>IsFrequentlyUsed</value>!)
        ->in('.');

      Except that will descend into subdirectories. You need a ->maxdepth(1) in there.
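      With ->maxdepth(1) added, the rule stays in the current directory. A sketch of the combined call (assuming File::Find::Rule is installed):

```perl
use strict;
use warnings;
use File::Find::Rule;

# Same search as above, but restricted to the top-level directory:
# maxdepth(1) keeps the rule from descending into subdirectories.
my @files = File::Find::Rule
    ->maxdepth(1)
    ->file()
    ->grep(qr!<value>IsFrequentlyUsed</value>!)
    ->in('.');
```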

      Makeshifts last the longest.

      Alternately, you may also want to take a look at merlyn's File::Finder. It has a nice clean interface.

        I still don't understand why that interface is supposed to be any better than File::Find::Rule's.

        Makeshifts last the longest.

Re: Alternative to using system grep
by Zaxo (Archbishop) on Aug 18, 2004 at 05:33 UTC

    You're missing the closing backtick and semicolon.

    It is pretty easy to emulate system grep -l in perl,

    my @arr = grep {
        local $/;
        open my $fh, '<', $_ or die $!;
        my $text = <$fh>;
        $text =~ /<value>IsFrequentlyUsed<\/value>/;
    } glob '*';
    If your files are large, you may wish to read them one line at a time, instead.
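    A line-at-a-time variant might look like the following sketch (grep_l_by_line is an illustrative name, not from the thread); it stops reading each file as soon as the pattern matches, so large files need not be slurped whole:

```perl
use strict;
use warnings;

# Emulate `grep -l` reading line by line: return the names of the
# files whose contents match $pattern, skipping unreadable files.
sub grep_l_by_line {
    my ($pattern, @files) = @_;
    my @matched;
    FILE: for my $file (@files) {
        open my $fh, '<', $file
            or do { warn "can't open $file: $!"; next FILE };
        while (my $line = <$fh>) {
            if ($line =~ $pattern) {
                push @matched, $file;
                next FILE;    # first match is enough; move on
            }
        }
    }
    return @matched;
}

my @arr = grep_l_by_line(qr!<value>IsFrequentlyUsed</value>!, glob '*');
```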

    After Compline,
    Zaxo

Re: Alternative to using system grep
by ysth (Canon) on Aug 18, 2004 at 05:27 UTC
    Here's one way:
    sub fgrep_dash_l {
        my $search = shift;
        local $/ = $search;
        my @files;
        while (my $file = glob("*")) {
            open my $fh, "<", $file
                or warn "error opening $file: $!" and next;
            defined(my $buffer = <$fh>) or next;
            push @files, $file
                if substr($buffer, -length($search)) eq $search;
        }
        return @files;
    }
    @arr = fgrep_dash_l("<value>IsFrequentlyUsed</value>");
    Probably not the best WTDI if any of the files are quite large.
Re: Alternative to using system grep
by superfrink (Curate) on Aug 18, 2004 at 15:27 UTC
    By all means use the Perl functions/code when you can, but I'm just guessing that the errors you are seeing with grep might be due to the search path for programs (i.e. $PATH) not including the directory that contains 'grep'. You could try specifying the full path to grep, like
    my @arr = `/usr/bin/grep -l "<value>IsFrequentlyUsed</value>" *`;
Re: Alternative to using system grep
by jsadusk (Acolyte) on Aug 18, 2004 at 21:38 UTC
    Just to throw in my implementation, if I were to write this I wouldn't just give it a directory name, I'd give it an array of files, assuming I could always just use glob("*") as one of its parameters. It'd end up like this:

    sub fgrep_dash_l {
        my $pattern     = shift;
        my @searchfiles = @_;
        my @found;
        foreach my $file (@searchfiles) {
            open my $fh, '<', $file
                or warn "error opening file $file: $!" and next;
            while (<$fh>) {
                push @found, $file and last if m/$pattern/;
            }
        }
        return @found;
    }