sarutobi has asked for the wisdom of the Perl Monks concerning the following question:
I wish to optimize this code, especially the cross-checking section. I will be handling logs on the order of 100 MB with thousands of error lines, and my approach is slow. Any idea on improvement is highly appreciated. Thank you.

```perl
%g_wvr_list = ();
%g_log_err  = ();
@tmpArr     = ();

# get errors from waiver list
open(IFP0, "<$opt_wvr_file") or die "Cannot open $opt_wvr_file: $!";
@tmpArr = <IFP0>;
foreach $el (@tmpArr) {
    chomp($el);
    $g_wvr_list{$el} = 1;
}
close(IFP0);

# get errors from test log file
@tmpArr   = `grep -iw error $opt_test_log`;
$errorCnt = $#tmpArr + 1;

# pass/fail
$waived = 0;
printDbg(@tmpArr);
foreach $key ( keys(%g_wvr_list) ) {
    printDbg("wvr List $key");
    @matchErr = grep( /\Q$key\E/, @tmpArr );
    printDbg("matchErrCnt {$#matchErr}");
    printDbg("matchErrCnt {@matchErr}");
    $waived += @matchErr if ( $#matchErr + 1 > 0 );
}
```
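For reference, one way to speed up the cross-checking (a sketch under assumptions, not taken from the replies below): read the log line-by-line in Perl instead of shelling out to `grep`, and join all waiver entries into a single alternation regex so each error line is scanned once rather than once per waiver key. Note this counts each waived line once, whereas the original tallies a line again for every waiver it matches; variable names echo the original post, and the command-line handling is assumed.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumption: file names arrive on the command line, standing in for
# $opt_wvr_file and $opt_test_log from the original post.
my ($opt_wvr_file, $opt_test_log) = @ARGV;

# Read the waiver list and build one alternation regex from all entries,
# so every error line is tested once instead of once per waiver key.
open my $wfh, '<', $opt_wvr_file or die "Cannot open $opt_wvr_file: $!";
chomp(my @waivers = <$wfh>);
close $wfh;
my $waiver_re = join '|', map { quotemeta } @waivers;
$waiver_re = qr/$waiver_re/;

my ($error_cnt, $waived) = (0, 0);

# Scan the log line-by-line; /\berror\b/i stands in for `grep -iw error`.
open my $lfh, '<', $opt_test_log or die "Cannot open $opt_test_log: $!";
while (my $line = <$lfh>) {
    next unless $line =~ /\berror\b/i;   # whole-word, case-insensitive
    $error_cnt++;
    $waived++ if @waivers && $line =~ $waiver_re;
}
close $lfh;

printf "errors: %d, waived: %d\n", $error_cnt, $waived;
```

This avoids both holding the whole log in memory and the nested loop over waivers times error lines.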
Re: efficient text processing
by moritz (Cardinal) on Nov 12, 2008 at 19:42 UTC
Re: efficient text processing
by gone2015 (Deacon) on Nov 12, 2008 at 19:50 UTC