I concur on grep. Most implementations have an option to accept Perl-compatible regexes (GNU grep's -P, for instance), so you wouldn't even have to modify your patterns. However, your vague description seems to indicate GBs' worth of text to search, and you don't mention how often this has to run. If grep, for some reason, is not an answer, I would first establish a baseline for 'slow': write a small program that just reads every line of every file you need to process. You obviously aren't going to get any faster than that using Perl.
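Something along these lines would do as a baseline (a minimal sketch; it assumes the log files are passed on the command line):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Baseline: just read every line of every file named on the
    # command line and count them. Any real search will be slower
    # than this, so it tells you what 'fast' even means here.
    my $lines = 0;
    $lines++ while <>;
    print "read $lines lines\n";

Time that against your current run and you'll know how much of the slowness is raw I/O versus your matching logic.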
I assume these files are actually being created on many different machines. Can you add a small program to each machine that processes each log file as it is created (i.e. spread the pain)? If you're on Linux, just 'tail -f' the log file and pipe it to your parser.
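Roughly like this (the log path, script name, and pattern are just placeholders for whatever you're actually searching for):

    tail -f /var/log/myapp.log | perl parser.pl

    # parser.pl -- sketch of a parser reading the piped lines
    use strict;
    use warnings;

    while (my $line = <STDIN>) {
        # print only the lines you care about (placeholder regex)
        print $line if $line =~ /ERROR/;
    }

That way each machine does its own filtering as the logs are written, and you only ship the interesting lines back for the big search.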