in reply to Re^6: Trying to optimize reading/writing of large text files.
in thread Trying to optimize reading/writing of large text files.

I've finally done the benchmarks.
And, surprisingly, the line-by-line "WHILE" method of Version #2 wins!
Version #1 consumed 208 MB of RAM vs. 70 MB used by Version #2.

Not only does it save a lot of memory, it is also about 20% faster!

My benchmark script rewrites a 100 MB test file with each method for 5 passes and prints the elapsed times in seconds. Here are the results:
------------------------------------
"ARRAY" BEST TIME : 1.227776
"ARRAY" TOTAL TIME: 6.73529
"WHILE" BEST TIME : 1.103754
"WHILE" TOTAL TIME: 5.71099
------------------------------------

The benchmark script code is:
#!/usr/bin/perl -w
##########################################
#                                        #
#    PURE PERL FILE WRITE BENCHMARK      #
#    METHODS TESTED: ARRAY vs WHILE      #
#                                        #
##########################################
use strict;

my $TEST_FILE_SIZE_MB = 100;
my $PASSES            = 5;

eval('use Time::HiRes;');
if ($@) { die("Couldn't load required libraries.\n"); }

my $file     = "./test.txt";
my $tempfile = $file.'.tmp'.int(rand()*99999);
my $flagfile = $file.'.lock';
my $log;

&testfilecheck;

my $debug;
my ($best_time_array, $best_time_while, $total_time_array, $total_time_while);

for (my $x=0; $x < $PASSES; $x++){
    my ($result,$dbg) = use_while();
    $total_time_while += $result;
    $best_time_while = $result if ($best_time_while > $result || !$best_time_while);
    $debug .= $dbg."\n";
}

sleep 1;

for (my $x=0; $x < $PASSES; $x++){
    my ($result,$dbg) = use_array();
    $total_time_array += $result;
    $best_time_array = $result if ($best_time_array > $result || !$best_time_array);
    $debug .= $dbg."\n";
}

print "Content-type: text/plain\n\n";
print <<EF;
"ARRAY" BEST TIME : $best_time_array
"ARRAY" TOTAL TIME: $total_time_array
"WHILE" BEST TIME : $best_time_while
"WHILE" TOTAL TIME: $total_time_while
----------------------------------
EF
exit;

# Create a ~100 MB test file of random records if it doesn't exist yet.
sub testfilecheck{
    unless (-e $file){
        open (NEW, ">$file");
        for (my $i=0; $i < $TEST_FILE_SIZE_MB*1000; $i++){
            my $rnd;
            for (my $y=0; $y < 988; $y++){
                $rnd .= int(rand()*9);
            }
            print NEW $rnd.'|'.time()."\n";
        }
        close NEW;
    }
}

# Version #1 ("ARRAY"): slurp the whole file into memory, then rewrite it in place.
sub use_array{
    my $startexectimemilliseconds = [ Time::HiRes::gettimeofday( ) ];
    my ($debug,$count,$lastline);

    open (DAT, "+<$file");
    flock DAT, 2;
    my @DATfile = <DAT>;
    seek (DAT, 0, 0);
    truncate (DAT, 0);
    foreach my $line (@DATfile){
        chomp ($line);
        my $replace = '|'.time();
        $line =~ s/\|\d+$/$replace/;
        print DAT $line."\n";
        $lastline = $line;
        $count++;
    }
    close DAT;

    my $elapsedtime = Time::HiRes::tv_interval( $startexectimemilliseconds );
    $debug = <<EF;
method: ARRAY
exec: $elapsedtime
count: $count
$lastline
EF
    return ($elapsedtime,$debug);
}

# Version #2 ("WHILE"): take the flag-file lock, stream line by line into a
# temp file, then rename the temp file over the original.
sub use_while{
    my $startexectimemilliseconds = [ Time::HiRes::gettimeofday( ) ];
    my ($debug,$count,$lastline);

    open (LOCK, "<$flagfile") || open (LOCK, ">$flagfile");
    flock LOCK, 2;
    open (DAT, $file);
    flock DAT, 2;
    open (TMP, ">$tempfile");
    flock TMP, 2;
    while (my $line = <DAT>){
        chomp ($line);
        my $replace = '|'.time();
        $line =~ s/\|\d+$/$replace/;
        print TMP $line."\n";
        $lastline = $line;
        $count++;
    }
    close TMP;
    close DAT;
    rename($tempfile,"$file");
    close LOCK;

    my $elapsedtime = Time::HiRes::tv_interval( $startexectimemilliseconds );
    $debug = <<EF;
method: WHILE
exec: $elapsedtime
count: $count
$lastline
EF
    return ($elapsedtime,$debug);
}

By the way, I've run several heavy stress tests by launching tens of script instances at once, and there were no problems with file integrity. Each instance waited until the previous one finished working with the file. Thanks for your tip about using a flag file!
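For reference, here is the flag-file locking pattern in isolation, as a minimal sketch; the Fcntl constants are just the symbolic form of the numeric flock LOCK, 2 call in the script above, and the file name is only an example:

use strict;
use warnings;
use Fcntl qw(:flock);   # LOCK_EX, LOCK_UN

my $flagfile = './test.txt.lock';   # example name, matches the script's $flagfile

# Open (or create) the flag file and take an exclusive lock.
# Other instances block on flock() until the lock is released.
open(my $lock, '>>', $flagfile) or die "Can't open $flagfile: $!";
flock($lock, LOCK_EX)           or die "Can't lock $flagfile: $!";

# ... read, rewrite and rename the data file here ...

# Releasing the lock lets the next waiting instance proceed.
flock($lock, LOCK_UN);
close($lock);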

Also, I realized that Version #2 is more robust against hardware crashes or loss of power. Even if the HDD shuts down during the write operation, there are always two copies of the file, DAT and TMP, and the data can always be recovered from one of them.
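A minimal sketch of that write-to-temp-then-rename idea, with one extra precaution that is not in the script above: checking that close() on the temp file succeeded before the rename, so a good file is never replaced by a truncated one. File names here are just for illustration:

use strict;
use warnings;

my $file     = './test.txt';        # example names only
my $tempfile = "$file.tmp$$";

open(my $dat, '<', $file)     or die "Can't read $file: $!";
open(my $tmp, '>', $tempfile) or die "Can't create $tempfile: $!";

while (my $line = <$dat>) {
    chomp $line;
    my $replace = '|' . time();
    $line =~ s/\|\d+$/$replace/;
    print {$tmp} $line, "\n" or die "Write to $tempfile failed: $!";
}
close($dat);

# close() reports buffered-write errors, so check it *before* renaming;
# only replace the original once the temp copy is known to be complete.
close($tmp) or die "Couldn't finish writing $tempfile: $!";
rename($tempfile, $file) or die "Couldn't replace $file: $!";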

Re^8: Trying to optimize reading/writing of large text files.
by Marshall (Canon) on Jan 25, 2012 at 07:04 UTC
    Great job on doing some benchmarks!
    Benchmark is a core module (meaning that it is included in all Perl installations without you having to install it yourself). This can simplify further benchmarking code.
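    For example, a minimal sketch with the core Benchmark module, assuming use_array() and use_while() are the subs from the script above:

        use Benchmark qw(timethese cmpthese);

        # Run each sub 5 times, print wallclock/CPU figures,
        # then show a relative comparison table.
        my $results = timethese(5, {
            ARRAY => \&use_array,
            WHILE => \&use_while,
        });
        cmpthese($results);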

    I haven't looked in detail at your new code, but Version #1 made an in-memory copy of the file. It is not surprising, and was to be expected, that not doing that would save memory! With 100 MB files, a 20% performance gain is also plausible (copying stuff around can be expensive).

    Data recovery is something that we didn't talk about, but if it is even remotely possible that something "can go wrong", it will "eventually go wrong" if you do it enough times!