in reply to Parsing Large Text Files For Performance

First, try just a null reading loop in Perl - this will give you a lower bound on how fast the Perl code can possibly be made:
while (<FILE>) { }
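For example, a minimal timed version of that loop might look like the following (the file name and the use of Time::HiRes for timing are just assumptions for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(time);            # sub-second timing (assumption)

    my $file = shift // 'capture.log';   # hypothetical log file name
    open my $fh, '<', $file or die "Cannot open $file: $!";

    my $start = time;
    while (<$fh>) { }                    # null read loop - no per-line work
    my $lines = $.;                      # number of lines read
    close $fh;

    printf "Read %d lines in %.3f seconds\n", $lines, time - $start;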
If this isn't fast enough, then you'll definitely have to use something other than Perl, at least for the initial filtering. If it is fast enough, then the next thing is to reject unwanted lines as quickly as possible. You probably want to do that with a single pre-compiled regexp:

my $re = qr/^$year-.{21} IP (?:\Q$IP1\E > \Q$IP2\E|\Q$IP2\E > \Q$IP1\E)/;
while (<FILE>) {
    next unless /$re/;
    ... do something ...
}
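Fleshed out into a self-contained filter, that might look something like this (the year, the IP addresses, and the file name are only placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $year = 2008;                              # hypothetical values
    my ($IP1, $IP2) = ('10.0.0.1', '192.168.1.2');

    # \Q...\E quotes the dots in the IP addresses so they match literally
    my $re = qr/^$year-.{21} IP (?:\Q$IP1\E > \Q$IP2\E|\Q$IP2\E > \Q$IP1\E)/;

    open my $fh, '<', 'capture.log' or die "Cannot open capture.log: $!";
    my $matched = 0;
    while (<$fh>) {
        next unless /$re/;
        ++$matched;                               # ... do something with the entry ...
    }
    close $fh;
    print "$matched matching lines\n";

Building the regexp once with qr// means it is compiled a single time, rather than on every line of the file.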

The exact form of the pattern will depend on how flexible the first line of each entry is, e.g. whether it uses a fixed number of digits for the sub-seconds value.
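If the sub-second field can vary in length, for instance, a looser anchor could be used instead of the fixed-width .{21} (this timestamp layout is only a guess at what the log looks like):

    # Tolerates any number of sub-second digits instead of a fixed-width field
    my $re = qr/^\Q$year\E-\d\d-\d\d \d\d:\d\d:\d\d\.\d+ IP (?:\Q$IP1\E > \Q$IP2\E|\Q$IP2\E > \Q$IP1\E)/;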

Dave.