One optimization depends on what $table and the lines in fh_log look like. If you can pre-sort the strings you are looking for into a fixed number of buckets, you can check $cur_line to see which buckets could possibly match it and so avoid matching against all the unsuitable buckets.
As an example, let's assume that $table always holds 10-digit numbers and that each fh_log line consists of text with a few of those 10-digit numbers in between.
Then you could put all numbers beginning with 00 into bucket 00, all numbers beginning with 01 into bucket 01, and so on. You then only need to extract the numbers from every line and check each of them against the numbers in the matching bucket.
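Here is a minimal sketch of that idea in Perl. The contents of @table and the sample log lines are made up for illustration; in real code you would read from your actual filehandle instead of the __DATA__ section.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # The strings we are searching for (made-up 10-digit numbers).
    my @table = qw(0012345678 0198765432 4455667788 0099887766);

    # Pre-sort them into buckets keyed by their first two digits.
    my %bucket;
    push @{ $bucket{ substr $_, 0, 2 } }, $_ for @table;

    while ( my $cur_line = <DATA> ) {
        # Extract every 10-digit number from the line ...
        for my $num ( $cur_line =~ /\b(\d{10})\b/g ) {
            # ... and compare it only against its own bucket.
            my $candidates = $bucket{ substr $num, 0, 2 } or next;
            if ( grep { $_ eq $num } @$candidates ) {
                print "match: $num\n";
            }
        }
    }

    __DATA__
    some text 0012345678 more text 4455667788
    nothing interesting here 1234567890

Instead of comparing every extracted number against all of @table, each number is only compared against the (usually much smaller) bucket that shares its first two digits.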
The same optimization works if you always look for whole words, or for sequences that always have a '-' in between (use the characters before and after the '-'), and in a lot of other situations; you just need some structure in your data that you can take advantage of.
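For the whole-word case, a variation of the same sketch (again with made-up search words) is to make each bucket a hash, so the final check is a single lookup rather than a scan through the bucket:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Made-up list of whole words to search for.
    my @search_words = qw(error timeout retry);

    # Bucket by first character; inside each bucket use a hash for exact lookup.
    my %bucket;
    $bucket{ substr $_, 0, 1 }{$_} = 1 for @search_words;

    while ( my $cur_line = <DATA> ) {
        for my $word ( $cur_line =~ /\b(\w+)\b/g ) {
            my $candidates = $bucket{ substr $word, 0, 1 } or next;
            print "match: $word\n" if $candidates->{$word};
        }
    }

    __DATA__
    connection timeout after 30 seconds
    all good here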