in reply to Is this Possible?

$. is a special variable that holds the record number of the most recently read filehandle, which typically means the line number of the last line read. If you were reading the file line by line, it would be useful. But it looks like you've slurped the whole file into the @Log array, and in that case $. will be useless for your purposes.
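To illustrate the difference (a quick sketch, assuming a filehandle $fh that you've opened on the log yourself):

    # Reading line by line: $. tracks the current input line number.
    while ( my $line = <$fh> ) {
        print "line $.: $line";
    }

    # After a slurp such as  my @Log = <$fh>;  $. only holds the number
    # of the last line read (the total line count), not where a match is.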

grep can do what you want. Instead of grepping over @Log, grep over the indices of @Log; later, if you wish, you can map those indices back to the values at those positions in @Log.
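Something along these lines (a rough sketch, reusing your @Log and $FAILURE_SEARCH names):

    my @index   = grep { $Log[$_] =~ m/^$FAILURE_SEARCH / } 0 .. $#Log;
    my @matches = map  { $Log[$_] } @index;    # indices back to entries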

I started working on a solution to post for you, but quickly discovered other problems that raised questions I would need answered before I could post a correct solution.

I'll take a stab at it with a new approach:

    sub process_log {
        my ($log, $pattern) = @_;

        # Indices of the entries that match the failure pattern.
        my @index = grep { $log->[$_] =~ m/^$pattern / } 0 .. $#$log;

        # $_ is a zero-based index, so add 1 for a human-friendly line
        # number.  The slurped entries should still end in "\n", which is
        # why none is printed before "This occurred".
        print "INFO: $log->[$_]This occurred on line ", $_ + 1, "\n"
            foreach @index;

        # Hand the matching entries back as one string.
        return join '', map { $log->[$_] } @index;
    }

    my $problems = process_log( \@Log, $FAILURE_SEARCH );

(Untested)

In my opinion, this is a better way:

    sub process_log {
        my ($log_fh, $pattern) = @_;
        my @issues;
        while ( my $line = <$log_fh> ) {
            if ( $line =~ m/^$pattern / ) {
                chomp $line;
                print "INFO: $line\nThis occurred on line $.\n";
                push @issues, { log_entry => $line, line_num => $. };
            }
        }
        return @issues;
    }

Here our function takes a filehandle and a pattern, and reads the file one line at a time. If a line matches the failure pattern, we print a message and push the line and its line number, as an anonymous hash, onto an array of all the issues discovered. At the end we return that array so the caller can inspect it further if needed.
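A caller might use it along these lines (the filename is just a placeholder; substitute your real log path and pattern):

    open my $log_fh, '<', '/path/to/system.log'
        or die "Can't open log: $!";

    my @issues = process_log( $log_fh, $FAILURE_SEARCH );

    close $log_fh;

    printf "Found %d failure line(s).\n", scalar @issues;
    print "$_->{line_num}: $_->{log_entry}\n" for @issues;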


Dave