I am using the code below to read all the files from a directory and store them in a hash, where the key is the filename and the value is an array of that file's lines. There are more than 200 files, and each file contains more than 2000 lines.

    opendir( SSPR, "/apps/inst1/metrica/TechnologyPacks/ON-SITE/summaryspr/" ) or die "$!";
    while ( defined( $file_name = readdir(SSPR) ) ) {
        next if ( -d $file_name );    # removing . and ..
        open( FH, "/apps/inst1/metrica/TechnologyPacks/ON-SITE/summaryspr/$file_name" ) or die "$!";
        $sspr_hash{$file_name} = [];
        @{ $sspr_hash{$file_name} } = <FH>;
        map { $_ =~ s/[\n\r]//g } @{ $sspr_hash{$file_name} };
        close(FH);
    }
    closedir(SSPR);
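For reference, here is a minimal sketch of the same read loop restructured with lexical filehandles and three-argument open. The directory path and %sspr_hash come from the question; everything else is illustrative. Note the -d test is applied to the full path, since readdir returns names relative to the directory, not the current working directory.

    use strict;
    use warnings;

    my $dir = "/apps/inst1/metrica/TechnologyPacks/ON-SITE/summaryspr";
    my %sspr_hash;

    opendir( my $dh, $dir ) or die "Cannot open $dir: $!";
    for my $file_name ( readdir $dh ) {
        next if -d "$dir/$file_name";      # skip . and .. and any subdirectories
        open( my $fh, '<', "$dir/$file_name" ) or die "Cannot open $file_name: $!";
        my @lines = <$fh>;
        close $fh;
        s/[\r\n]//g for @lines;            # same newline/CR stripping as the original
        $sspr_hash{$file_name} = \@lines;
    }
    closedir $dh;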
In the loop below, each filename is taken in the outer loop and its lines are scanned in an inner loop. The performance of the code is quite slow. How can I improve its efficiency? Please reply with tips based on your experience with this kind of code.

    # this loop analyses the schema from the summary spr files
    foreach $file_name ( keys %sspr_hash ) {
        foreach $line ( @{ $sspr_hash{$file_name} } ) {
            if ( grep( /$old_schema/i, $line ) ) {
                print "$old_schema|$new_schema_str|$found_status|$migratestr|$rename_str|$file_name\n";
            }
        }
    }
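One likely hot spot is calling grep on a single scalar inside the inner loop, which builds a one-element list and recompiles the interpolated pattern on every iteration. A minimal sketch of the same scan with the pattern compiled once via qr// and a direct binding match, assuming %sspr_hash and the schema variables ($old_schema, $new_schema_str, $found_status, $migratestr, $rename_str) from the question are in scope:

    my $schema_re = qr/\Q$old_schema\E/i;   # \Q..\E assumes $old_schema is a literal name, not a pattern

    foreach my $file_name ( keys %sspr_hash ) {
        foreach my $line ( @{ $sspr_hash{$file_name} } ) {
            if ( $line =~ $schema_re ) {
                print "$old_schema|$new_schema_str|$found_status|$migratestr|$rename_str|$file_name\n";
                # last;  # uncomment if one hit per file is enough
            }
        }
    }

If only the presence of the schema per file matters (the printed line does not use $line), the commented-out last would stop scanning a file after its first match.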