Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hi monks, I would like to know whether I can use grep (perhaps combined with another function) to mimic GNU grep -n, that is, to return the array index as a line number, with or without the element's content?

Also, when matching against fixed strings, which one will be faster?

1. GNU grep -F (or fgrep) called from perl (via system() or backticks)

2. perl grep()

Thank you.


Re: grep return the index instead of the element content, and its speed
by ikegami (Patriarch) on Feb 28, 2008 at 09:38 UTC

    Instead of

    my @filtered = grep { condition($_) } @elements;

    I think you're asking for

    my @indexes = grep { condition($elements[$_]) } 0..$#elements;

    I don't think you'll get much speed out of that unless you're dealing with very long strings.
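
    As for grep -F versus Perl's grep(), here is a rough sketch of the two candidates from the question, so you can time them yourself (the file name and search string are made up for illustration):

    # 1. GNU grep called from Perl via backticks; -F matches the pattern
    #    as a fixed string and -n prefixes each hit with its line number.
    my @external = `grep -Fn 'needle' data.txt`;

    # 2. Pure Perl, line by line: index() does a fixed-string match and
    #    $. holds the current line number.
    open my $fh, '<', 'data.txt' or die "data.txt: $!";
    my @internal;
    while (<$fh>) {
        push @internal, "$.:$_" if index($_, 'needle') >= 0;
    }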

    Update: You could also use one of the following. They use much less memory and they're probably faster.

    my @filtered;
    for (@elements) {  # Not any list, an array specifically.
        push @filtered, $_ if condition($_);
    }

    or

    my @indexes;
    for (0..$#elements) {
        push @indexes, $_ if condition($elements[$_]);
    }

      Hi ikegami,

      Thank you for your quick reply.

      Question about your update: why does the for loop version use much less memory?

      Also, will accessing the array elements directly by index make the grep slower?

      (* speed matters much more than memory, as I need to repeat this action thousands of times, and the strings in the array being searched usually contain hundreds of words...)

        why does the for loop version use much less memory?

        The for (x..y) and for (@array) loops (but no other kind of for (LIST) loop) are optimized to iterate over the list without flattening it.

        $n = 10_000_000;
        print ": "; <>;                                        # 2MB
        $f = 1;
        for (0..$n-1) { if ($f) { $f = 0; print ": "; <>; } }  # 2MB

        $n = 10_000_000;
        print ": "; <>;                                        # 2MB
        push @a, --$n while $n;
        print ": "; <>;                                        # 240MB
        $f = 1;
        for (@a) { if ($f) { $f = 0; print ": "; <>; } }       # 240MB

        grep { ... } LIST flattens the list, so grep { ... } @array loads the entire array* onto the stack, and grep { ... } 0..$#array creates a list in memory as big as the array.

        $n = 10_000_000;
        print ": "; <>;                                          # 2MB
        $f = 1;
        grep { if ($f) { $f = 0; print ": "; <>; } 0 } 0..$n-1;  # 240MB <--

        $n = 10_000_000;
        print ": "; <>;                                          # 2MB
        push @a, --$n while $n;
        print ": "; <>;                                          # 240MB
        $f = 1;
        grep { if ($f) { $f = 0; print ": "; <>; } 0 } @a;       # 280MB <--

        * - The amount of memory taken is proportional to the number of elements. It doesn't matter how much memory each of those elements takes.

        Also, will accessing the array elements directly by index make the grep slower?

        I don't completely understand what you said, but the answer is clear: Write a Benchmark test to find out.
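
        A minimal sketch of such a test (condition() and the data are hypothetical stand-ins); Benchmark's cmpthese() runs each sub for about three CPU seconds and prints a comparison table:

        use Benchmark qw(cmpthese);

        my @elements = map { "word $_ " x 100 } 1 .. 1000;  # made-up data
        sub condition { $_[0] =~ /needle/ }

        cmpthese(-3, {
            grep_idx => sub {
                my @i = grep { condition($elements[$_]) } 0..$#elements;
            },
            for_idx => sub {
                my @i;
                for (0..$#elements) { push @i, $_ if condition($elements[$_]) }
            },
        });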

        Updated: Added code snippets.

Re: grep return the index instead of the element content, and its speed
by grinder (Bishop) on Feb 28, 2008 at 10:38 UTC

    Reading a file line by line imposes an overhead by its very nature. If the files are small compared to the memory of the machine you're running on, you should slurp each one into memory and then operate on its contents as a single string. This alone will be a win over building an array of lines, over and above the line-oriented cost.

    You can then do a quick check on the entire contents to see if the string appears anywhere. This uses a fast Boyer-Moore algorithm and can quickly weed out files that don't match. If you expect 99% of files to contain what you're searching for, the check will probably be a net loss, so in that case you should comment out the next unless statement in the code below.

    To find the line number, you just have to match the section before your match, the line containing the match itself, and count the intervening line-breaks afterwards. The regexp below is probably a bit horrible in terms of performance. No doubt a better regexp hacker than I would be able to come up with something that doesn't backtrack:

    #! /usr/local/bin/perl
    use strict;
    use warnings;

    my $target = shift;
    $target = qr/$target/;

    for my $file (@ARGV) {
        my $str = do { local $/ = undef; open my $in, '<', $file; <$in> };
        next unless $str =~ /$target/;
        my $current = 1;
        while ($str =~ /((?:[^\n]*?\n)*?)(.*?$target[^\n]*)/g) {
            $current += $1 =~ tr/\n/\n/;
            print "$file($current): $2\n";
        }
    }

    I would expect this to run faster than a grep-based version.
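
    If the target really is a fixed string rather than a pattern, the inner loop could also stay out of the regex engine entirely. The following is only a sketch of that variant, not grinder's code; it assumes $target is a plain string and $str and $file are as in the script above:

    my $pos  = 0;
    my $line = 1;
    while ((my $hit = index($str, $target, $pos)) >= 0) {
        # Count the newlines between the previous position and this hit.
        $line += substr($str, $pos, $hit - $pos) =~ tr/\n/\n/;
        my $bol = rindex($str, "\n", $hit) + 1;   # start of the hit's line
        my $eol = index($str, "\n", $hit);
        $eol = length $str if $eol < 0;           # last line, no newline
        print "$file($line): ", substr($str, $bol, $eol - $bol), "\n";
        $pos = $eol;                              # resume after this line
    }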

    • another intruder with the mooring in the heart of the Perl

      my $in;
      my $key   = 10;
      my @array = ....
      # The trailing ", 1" keeps the block true even when the match is at
      # index 0, where ($in = $_) alone would evaluate to false.
      my $success = grep { $array[$_] eq $key && (($in = $_), 1) } 0..$#array;
      # $success is 0 if not found, otherwise the number of matches;
      # $in contains the offset (index) of the last match.
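
      A related sketch, assuming the List::MoreUtils module is available: its first_index stops at the first match instead of scanning the whole array the way grep does, which matters when the search is repeated thousands of times:

      use List::MoreUtils qw(first_index);

      my $key = 10;
      my $in  = first_index { $_ eq $key } @array;  # -1 if not found
      print "found at offset $in\n" if $in >= 0;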