in reply to Reduce the time taken for Huge Log files

As mentioned, it is hard to read your code: it is long, it does not use strict and warnings, and it does not adhere to the smallest-possible-test-case ideal. That said, here are a couple of hints to tidy things up.

It does look like you are taking each business and comparing all the file entries against it before reading the next business off the array. This means you are reading the input files $number_of_business times. Reading a file is slow; iterating through an array held in memory is fast.
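Here is a minimal sketch of the difference, assuming a log file called access.log and a flat list of business names (both hypothetical; your real data structure will differ):

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical data for illustration only.
my @businesses = ('foo', 'bar', 'baz');
my $logfile    = 'access.log';    # assumed log file name

# Slow pattern: the log is re-opened and re-read once per business,
# so the I/O cost is multiplied by the number of businesses.
for my $business (@businesses) {
    open my $fh, '<', $logfile or die "Cannot open $logfile: $!";
    while (my $line = <$fh>) {
        print $line if $line =~ /\Q$business\E/;
    }
    close $fh;
}

# Fast pattern: the log is read exactly once; the inner loop walks an
# in-memory array, which is far cheaper than another pass over the disk.
open my $fh, '<', $logfile or die "Cannot open $logfile: $!";
while (my $line = <$fh>) {
    for my $business (@businesses) {
        print $line if $line =~ /\Q$business\E/;
    }
}
close $fh;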

perl -MPOSIX -le'print strftime "%X %x", localtime(time)'
12:28:59 18/03/05
If I missed the boat completely, sorry.

Hopefully this code can give you a couple of ideas ...

#!/usr/bin/perl
use warnings;
use strict;

my @businesses = (
    ['foo',      'bar',    'baz'],
    ['een',      'twee',   'drie'],
    ['ichi',     'ni',     'san'],
    ['hydrogen', 'helium', 'lithium'],
);

# Build one compiled regex per business group, keyed by its source string.
my %regexen;
foreach my $group (@businesses) {
    print "making regex from group @{$group} ... ";
    my $regex = join "|", @$group;
    print "\\$regex\\\n";
    my $compiled_re = qr/$regex/;
    $regexen{$regex} = $compiled_re;
}

# Read the input once; test every group against each line in memory.
while (my $line = <DATA>) {
    for my $group (keys %regexen) {
        next unless $line =~ /$regexen{$group}/;
        print "The line $line matched the business group $group\n";
    }
}

__DATA__
nosuch
foo
this
that
helium ballon
ichi
foot
een
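To run the same loop over your real log instead of the __DATA__ section, only the filehandle changes; a hedged sketch, assuming the log is called huge.log (hypothetical name) and that one hit per line is enough so we can skip the remaining groups with last:

open my $log, '<', 'huge.log' or die "Cannot open huge.log: $!";
while (my $line = <$log>) {
    for my $group (keys %regexen) {
        next unless $line =~ /$regexen{$group}/;
        print "The line $line matched the business group $group\n";
        last;    # assumption: stop after the first matching group
    }
}
close $log;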

Cheers,
R.

Pereant, qui ante nos nostra dixerunt! ("May they perish who said our things before us!")