At work, we have a large zipped log file (over 500MB) on a Linux server, and a Perl script that generates a report from it. I'm new to Perl and I inherited the script. Everything ran fine (processing took about 35 seconds) until we increased volume; now the same script takes 90 minutes because of the amount of data. I'm making several calls to the Linux grep command from within the script. Is there a faster way to do this using only Perl rather than the Linux command, or is this the best way?
Some additional detail: I first build a list of the unique things I'm interested in, similar to a product ID (the list contains about 7000 unique items). Then I iterate over the big log file once, using a regex to find the lines I need for gathering additional data about each product ID, and I write those lines out to a few new (smaller) files. Finally, I loop through the product ID list once and execute several different grep commands against the new smaller files I created. Again, this is the Linux grep command, not Perl's grep, called like this:
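(The original snippet didn't come through with the post, so the following is only a minimal sketch of the kind of per-ID external grep loop described above; the file names, ID values, and backtick style are assumptions, not the inherited script's actual code.)

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical sketch only -- file names and IDs are made up,
    # not taken from the inherited script.
    my @product_ids = ('ABC123', 'XYZ789');   # in reality ~7000 unique IDs

    for my $id (@product_ids) {
        # Each backtick call forks a shell and re-scans the smaller file,
        # so several greps per ID means thousands of external processes.
        my @detail_lines = `grep '$id' details_subset.log`;
        my @error_lines  = `grep '$id' errors_subset.log`;

        # ... use @detail_lines / @error_lines to build the report ...
    }

With ~7000 IDs and several greps per ID, that is tens of thousands of external processes, each re-reading the intermediate files, which seems to be where the time is going.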
Thanks