in reply to Grep Speeds

Everyone has already pointed out the advantages of DBMs and databases. But if you just need to process the files once, it is probably most efficient to hash your search strings instead. Like this (untested):
# Build a hash lookup for things I want, defaulting to not there
my %found;
my @search;
foreach my $item (@timearray) {
    my $id = "$NETID|$month/$date/$year|$item|";
    push @search, $id;
    $found{$id} = "||$item||";
}

# Scan the file for them
my $file = "/PHL/data1/PHL/tmp/ECL_STAT";
open(ECL_STAT, "< $file") or die "Cannot read $file: $!";
while (<ECL_STAT>) {
    chomp;
    if (/(([^\|]+\|){3})/ and exists $found{$1}) {
        $found{$1} = $_;
    }
}

# Build output array
push @ECL, map $found{$_}, @search;
Now you only need to scan the file once, and you only build in-memory data structures for the data you are actually looking for. (The results also come back in the order you asked for them; this code would be simpler if I could assume that order didn't matter.)
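For comparison, here is a minimal sketch of that simpler, order-agnostic variant (untested, like the above, and assuming the same @timearray, $NETID, $month, $date, $year, and $file from the code above): hash only the keys, then collect matching lines in whatever order the scan encounters them:

# Hypothetical order-agnostic rewrite: build a set of wanted keys,
# then push matches as the scan finds them.
my %want;
foreach my $item (@timearray) {
    $want{"$NETID|$month/$date/$year|$item|"} = 1;
}

my @ECL;
open(ECL_STAT, "< $file") or die "Cannot read $file: $!";
while (<ECL_STAT>) {
    chomp;
    # $1 is the leading "netid|date|item|" prefix, same as above
    push @ECL, $_ if /(([^\|]+\|){3})/ and $want{$1};
}
close ECL_STAT;

Note what the shortcut gives up: the version above returns results in @timearray order and leaves a "||$item||" placeholder for any key that never appears in the file; this one does neither.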