Thanks, that seems simple enough; it works and only takes about 8 seconds, so it should be good. I don't know whether my interpretation of the suggestion is optimal, but it got me to a solution. I just used an array, since my data didn't need a keyed hash. The files are fairly basic text, less than 40,000 lines.
#!/usr/bin/perl
use strict;
use warnings;
use List::MoreUtils qw(any);

open my $wave, '>', 'Wave' or die "Can't open Wave: $!";
open my $keywords, '<', 'Agents' or die "Can't open Agents: $!";
open my $search_file, '<', 'Definitions' or die "Can't open Definitions: $!";
open my $schedule, '<', 'Schedule' or die "Can't open Schedule: $!";

my @sched = <$schedule>;   # read entire file into an array at the start
chomp @sched;              # so lines compare cleanly against chomped input

my $keyword_or = join '|', map { chomp; qr/\Q$_\E/ } <$keywords>;
my $regex = qr/\b($keyword_or)\b/;

while (<$search_file>)
{
    next unless /$regex/;  # keep only lines containing a keyword
    my $line = $_;
    chomp $line;
    next if $line =~ /(SCRIPTNAME|DESCRIPTION)/;
    print $wave $line;
    # check if the line we're on is also in the @sched array
    if (any { $_ eq $line } @sched) {
        print $wave " | Yes!\n";
    }
    else {
        print $wave " | No!\n";
    }
}

close $_ for $wave, $keywords, $search_file, $schedule;
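Since the post above mentions choosing an array over a keyed hash, here is a minimal sketch (with made-up schedule lines standing in for the real 'Schedule' file) of what the hash version of the membership test would look like; it trades the linear `any` scan for constant-time lookups:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical schedule lines for illustration only.
my @sched = ("alpha\n", "beta\n");
chomp @sched;

# Build a lookup hash once; each membership test is then O(1)
# instead of scanning the whole array per matched line.
my %in_sched = map { $_ => 1 } @sched;

for my $line ('alpha', 'gamma') {
    print $line, ($in_sched{$line} ? " | Yes!\n" : " | No!\n");
}
# prints "alpha | Yes!" then "gamma | No!"
```

With ~40,000 lines either approach is fast, but the hash keeps the check cheap if the schedule file grows.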
Would you show partial contents of the files? Maybe the first 10 lines, or whatever you think will give an accurate subset of the data. Also, you should "use strict;". That would, among other things, help you spot the problem with "Can't open $wave: $!".
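A minimal sketch of the error-message problem mentioned above (the filename here is hypothetical): when open fails, the handle variable holds nothing useful, so interpolating it in the die string prints nothing helpful. Interpolate the filename instead:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical filename for illustration.
my $file = 'Wave';

# On failure, die reports the filename and $! (the OS error),
# not the empty handle variable.
open my $wave, '>', $file or die "Can't open $file: $!";
print $wave "example\n";
close $wave or die "Can't close $file: $!";
```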