in reply to Re: Using system (); with Strawberry Perl
in thread Using system (); with Strawberry Perl

File::Grep was slow because I used fgrep instead of just grep ... I'll test out your code later in the day when I have enough test files

Replies are listed 'Best First'.
Re^3: Using system (); with Strawberry Perl
by Marshall (Canon) on Nov 26, 2021 at 20:04 UTC
    This code:

        print $OUT "$filename\n" if grep { /DATAmessage.*3\.0/ } <$in>;

    is slow because it keeps reading the file even after it has found the first match (grep in scalar context counts every matching line in the file). To use hippo's idea: add use List::Util qw(any); at the top, and change the code to:

        print $OUT "$filename\n" if any { /DATAmessage.*3\.0/ } <$in>;

    The "any" routine from List::Util is implemented in C (XS). The pure-Perl equivalent is like this:

    while (<$in>) {
        if (/DATAmessage.*3\.0/) {
            print $OUT "$filename\n";
            last;    # no need to look any more!
        }
    }
    If whatever you are looking for usually appears near the beginning of the file, performance gain will be substantial.
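To see the short-circuit effect without any files, here is a small self-contained sketch (the sample data and the counter variables are invented for illustration):

```perl
use strict;
use warnings;
use List::Util qw(any);

# One matching line followed by a pile of non-matching filler.
my @lines = ("DATAmessage v3.0\n", ("filler\n") x 1000);

# grep examines every element, even after the first match.
my $grep_checks = 0;
my $matches = grep { $grep_checks++; /DATAmessage.*3\.0/ } @lines;

# any returns true at the first match and stops.
my $any_checks = 0;
my $found = any { $any_checks++; /DATAmessage.*3\.0/ } @lines;

print "grep examined $grep_checks elements, any examined $any_checks\n";
```

With the match on the very first line, grep still walks all 1001 elements while any stops after one, which is where the gain on large files comes from.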

    Update:
    Another place to use a List::Util function:

    {
        my %unique;
        print $OUT sort grep { !$unique{$_}++ } <$IN>;
    }

    ##### again, use List::Util to speed up the Perl implementation ####
    use List::Util qw(uniq);
    print $OUT sort( uniq(<$IN>) );    # parens keep sort from treating uniq as a comparator sub
    I suppose that, depending upon the data, reversing the order, i.e., sorting first and then filtering out duplicate lines, could be faster? I don't know, but if speed is needed I would benchmark that approach as well. Also, instead of building a hash table, try: print a line unless it's a repeat of the previous line. The results probably depend upon what the typical data actually looks like. For example:
    my $prev = "";
    foreach (sort <$IN>) {
        print $OUT $_ unless $_ eq $prev;
        $prev = $_;
    }
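Before trusting any benchmark, the three variants should at least agree on their output. A quick sanity check on made-up data (the variable names here are mine, not from the thread):

```perl
use strict;
use warnings;
use List::Util qw(uniq);

my @data = ("b\n", "a\n", "b\n", "c\n", "a\n");

# 1. Hash filter first, then sort (the original approach).
my @v1 = do { my %unique; sort grep { !$unique{$_}++ } @data };

# 2. List::Util::uniq first, then sort.
my @v2 = sort( uniq(@data) );

# 3. Sort first, then drop adjacent repeats.
my @v3;
my $prev = "";
for (sort @data) {
    push @v3, $_ unless $_ eq $prev;
    $prev = $_;
}

print @v1;    # all three produce: a, b, c (one line each)
```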