in reply to Best way to search large files in Perl

"Again, I'm using the Linux grep command, not Perl grep"
Unix grep and Perl's grep don't really do the same thing anyway; about the only thing they have in common is the idea of filtering data. And they don't filter the same type of data: Unix grep filters the lines of a file, while Perl's grep filters the elements of an array or a list (this is slightly simplified, but that's the idea).
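For instance (a minimal sketch; the data and the /foo/ pattern are made up):

    use strict;
    use warnings;

    # Perl's grep filters the elements of a list held in memory:
    my @lines   = ("foo bar", "baz", "foo qux");
    my @matches = grep { /foo/ } @lines;    # ("foo bar", "foo qux")
    print "$_\n" for @matches;

    # whereas the Unix grep filters the lines of a file from the shell:
    #     $ grep foo some.log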

Next, the conditions you are reporting are not very clear. You have a script that apparently processes a 500 MB log in 35 seconds (not bad), and then you say you get to 90 minutes (roughly 150 times more), but with no size indication. Is your new file really 150 times larger? In brief, what are the differences between the 35-second run and the 90-minute run?

Calling the Linux grep from Perl is usually not considered a great idea. It may be a real problem (each time you do it, you fire up a new shell, or two if you pipe commands), or it may be completely negligible, depending on how many times you do it relative to the data size. It also depends on how you use the Linux grep: if you launch it 7,000 times over the file, it is very likely to be inefficient.
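For example, instead of shelling out once per search term, you could read the file a single time and test every term against each line. A rough sketch of the idea; @terms and the file name 'big.log' are placeholders for whatever your script actually uses:

    use strict;
    use warnings;

    my @terms = qw(ERROR TIMEOUT user42);    # hypothetical search terms
    my %hits;

    # One pass over the file, no external processes spawned.
    open my $fh, '<', 'big.log' or die "Cannot open big.log: $!";
    while (my $line = <$fh>) {
        for my $term (@terms) {
            push @{ $hits{$term} }, $line if index($line, $term) >= 0;
        }
    }
    close $fh;

    printf "%s: %d matches\n", $_, scalar @{ $hits{$_} || [] } for @terms;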

This is even more true if you uncompress the file for each search. It would almost certainly be better to uncompress the file only once, and then look for your data in the uncompressed version.
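If the file is gzipped, for instance, you can decompress it once as a stream with the core IO::Uncompress::Gunzip module and scan it in the same pass, instead of spawning zcat or gunzip for every search. A sketch only; 'big.log.gz' and the /ERROR/ pattern are made up:

    use strict;
    use warnings;
    use IO::Uncompress::Gunzip qw($GunzipError);

    # Decompress once, scan once.
    my $z = IO::Uncompress::Gunzip->new('big.log.gz')
        or die "gunzip failed: $GunzipError";
    while (my $line = $z->getline) {
        print $line if $line =~ /ERROR/;
    }
    $z->close;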

But the bottom line is that we would need much more information to suggest a better solution. Basically, you should show us your code and a small sample of the data.