in reply to Re: Reduce the time taken for Huge Log files
in thread Reduce the time taken for Huge Log files

The consolidate_logs function could then be optimized to this:
sub consolidate_logs ($$$) {
    my ($destination_file, $dir, $filename_str) = @_;
    my @files = get_matching_filenames($dir, $filename_str);

    open(OUT, "> $destination_file")
        or die "Could not open file \"$destination_file\" for writing";
    foreach my $source_file (@files) {
        print "Processing of log \"$source_file\" started at " . localtime() . "\n";
        system("cat $dir/$source_file >> $destination_file");
        print "Processing of log \"$source_file\" ended at " . localtime() . ".\n";
    }
    close(OUT);
}
(using "cat" program instead of perl code for simply transferring large quantities of data)
or even to this:
sub consolidate_logs ($$$) {
    my ($destination_file, $dir, $filename_str) = @_;

    system("ls $dir | grep $filename_str | xargs -iX cat $dir/X >> $destination_file");
}
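Note that this one-liner version drops the per-file progress messages and relies entirely on the shell pipeline. A hypothetical call, just to show the expected arguments (the paths and filename pattern below are invented):

consolidate_logs("/inside29/urchin/test/newfeed/combined.log",
                 "/inside29/urchin/test/logs",
                 "access_log");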
The split_logs function could be simplified to this:
sub split_logs ($$$) {
    my ($source_file, $business_list, $filename_prefix) = @_;

    foreach my $business (@$business_list) {
        my ($name, $file) = @$business;
        my $outfile = "/inside29/urchin/test/newfeed/$filename_prefix-$file";

        print "Creating of log for $name started at " . localtime() . "\n";
        system("grep \"$name\" $source_file >> $outfile");
        print "Log for $name created at " . localtime() . "\n";
    }
}
again, an external program ("grep" this time) is used for simple string matching over a large quantity of data.
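A hypothetical call, assuming each element of the business list is a [name, file] pair as the code above expects (the names, file suffixes, and source path are invented):

my @businesses = (
    [ 'Acme Corp',  'acme.log'   ],
    [ 'Widget Inc', 'widget.log' ],
);
split_logs('/inside29/urchin/test/newfeed/combined.log', \@businesses, 'feed');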

bartek