in reply to Re^3: Trying to optimize reading/writing of large text files.
in thread Trying to optimize reading/writing of large text files.

I'm sure a database would be the best solution, but this script runs in a very restricted environment with no access to SQL or even to CPAN modules, so I have to code it in pure Perl.

The LOG file is (potentially) heavily accessed by numerous script instances. About 90% of the accesses are READ, and the other 10% are READ-MODIFY-OVERWRITE; the code we are discussing here is the READ-MODIFY-WRITE part of the program. I used "flock LOG, 1" because I wanted to let other instances keep read-only access to the LOG (even if its contents are outdated).

The "flag" file is a good idea; I think it would finally resolve the problem of possible file corruption. But it adds one more file operation and may affect performance, so I'm going to experiment a little and benchmark different versions of this code to see which is best.
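
Here is a rough sketch of the flag-file idea as I understand it (the file names are just placeholders, not the real ones): every instance, reader or rewriter, serialises on a small permanent lock file, so the LOG itself can be replaced by rename() without anyone ever holding a lock on the LOG handle.

use strict;
use warnings;
use Fcntl qw(:flock);

my $logfile  = 'script.log';             # placeholder name
my $flagfile = "$logfile.lock";

# Every instance, reader or rewriter, serialises on the flag file.
open my $flag, '>>', $flagfile or die "open $flagfile: $!";
flock $flag, LOCK_EX           or die "flock $flagfile: $!";

# ... read LOG, write the modified copy to a temp file, rename() it over LOG ...

close $flag;                             # lock released here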

By the way, I'm still in doubt whether the lock established by "flock LOG, 1" will be removed by the rename() operation, and this is very important to know. When I wrote Version#2, I assumed that rename() uses system functionality to physically overwrite LOG with TEMP (i.e. it doesn't interfere with flock). And since LOG is opened read-only, and flock is advisory and affects only cooperating scripts, there is probably a chance that the scripts involved will continue to obey this lock until it is released by close(), even if the file was physically overwritten.
Are these assumptions mistaken?
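
To make the question concrete, here is a small test sketch (file names and the fork-based setup are only for illustration, and it assumes a Unix-like system where rename() atomically replaces an existing target): the parent holds an exclusive flock on the original LOG handle, the file is replaced via rename(), and a child process then tries a non-blocking lock on the same path. If the child succeeds, the old lock evidently stays with the replaced file rather than following the name.

#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Test files (placeholder names).
my $log = 'test.log';
my $tmp = 'test.tmp';
open my $fh, '>', $log or die "create $log: $!"; close $fh;
open my $t,  '>', $tmp or die "create $tmp: $!"; close $t;

# Parent: hold an exclusive lock on the original LOG handle.
open my $old, '+<', $log or die "open $log: $!";
flock $old, LOCK_EX      or die "flock $log: $!";

# Replace LOG with TEMP while the lock is still held.
rename $tmp, $log or die "rename: $!";

defined(my $pid = fork) or die "fork: $!";
if ($pid == 0) {
    # Child: the path now points at the new file (the former TEMP).
    open my $new, '+<', $log or die "reopen $log: $!";
    if (flock $new, LOCK_EX | LOCK_NB) {
        print "child: got the lock -- the old lock stayed with the replaced file\n";
    } else {
        print "child: blocked -- the lock followed the name\n";
    }
    exit 0;
}
waitpid $pid, 0;
close $old;    # releases the lock held on the old, now-replaced file
unlink $log;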

Re^5: Trying to optimize reading/writing of large text files.
by nikkimouse (Initiate) on Jan 23, 2012 at 03:45 UTC
    The post above is mine; my session had just expired, so I was shown as "Anonymous Monk" :)

    A little update: I have had unmodified Version#2 running for several hours under heavy load (10-15 scripts at once) and there is still no file corruption. But that is probably just luck. My question about interference between flock() and rename() is still open...
      So, I'm going to experiment a little and benchmark different versions of this code to see which is best.

      If performance matters, this is always a good idea!

      For what you want to do, getting a "read lock" on LOG basically means nothing; you need an exclusive lock. There is no need for any kind of lock on the temp file - it should be a unique file anyway, and if it is unique and for your own access, nobody else is going to mess with it.
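
      For illustration, a minimal sketch of the exclusive-lock version (the file name is assumed), using the Fcntl constants rather than the bare numbers 1 and 2:

      use strict;
      use warnings;
      use Fcntl qw(:flock);

      open my $log, '+<', 'script.log' or die "open: $!";
      flock $log, LOCK_EX              or die "flock: $!";  # exclusive; other cooperating flock() callers wait
      # ... read, modify, rewrite ...
      close $log;                                           # lock released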

      You haven't explained much (actually, anything) about what LOG does in terms of IPC, except that the file is used for Inter-Process Communication.

      There is a difference between "guaranteed to work all of the time" and "very high probability of working".

      My question about interference between flock() and rename() is still open.

      If the file is closed, the lock is released. You cannot have a lock unless the file is open. You cannot rename x=>y unless y doesn't exist. If your process relies upon a "write" lock on y, this won't work (all of the time) because you have to delete "y" before re-naming x=>y. If your OS allows x to replace an existing file y, then I'd like to see a Perl example.

      rename, like all file operations, can fail -- check the return status.
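
      For example (variable names assumed to match the earlier posted version):

      rename($tempfile, $file)
          or die "Can't rename $tempfile to $file: $!";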

        Finally, I've done the benchmarks.
        Surprisingly, the line-by-line "WHILE" method of Version#2 wins!
        Version#1 consumed 208 MB of RAM vs. the 70 MB used by Version#2.

        Not only does it save a lot of memory, it is also 20% faster!

        My benchmark script read-writes a 100 MB file in 5 passes with each method and prints the results in seconds. Here are the results:
        ------------------------------------
        "ARRAY" BEST TIME : 1.227776
        "ARRAY" TOTAL TIME: 6.73529
        "WHILE" BEST TIME : 1.103754
        "WHILE" TOTAL TIME: 5.71099
        ------------------------------------

        benchmark script code is:
#!/usr/bin/perl -w
##########################################
#                                        #
#    PURE PERL FILE WRITE BENCHMARK      #
#    METHODS TESTED: ARRAY vs WHILE      #
#                                        #
##########################################
use strict;

my $TEST_FILE_SIZE_MB = 100;
my $PASSES            = 5;

# Load Time::HiRes at run time so the script can die gracefully if it is missing.
eval('use Time::HiRes;');
die "Couldn't load required libraries.\n" if $@;

my $file     = "./test.txt";
my $tempfile = $file . '.tmp' . int(rand() * 99999);
my $flagfile = $file . '.lock';

&testfilecheck;    # create the test file on the first run

my $debug;
my ($best_time_array, $best_time_while, $total_time_array, $total_time_while);

# Benchmark the line-by-line WHILE method.
for (my $x = 0; $x < $PASSES; $x++) {
    my ($result, $dbg) = use_while();
    $total_time_while += $result;
    $best_time_while = $result if ($best_time_while > $result || !$best_time_while);
    $debug .= $dbg . "\n";
}

sleep 1;

# Benchmark the slurp-into-ARRAY method.
for (my $x = 0; $x < $PASSES; $x++) {
    my ($result, $dbg) = use_array();
    $total_time_array += $result;
    $best_time_array = $result if ($best_time_array > $result || !$best_time_array);
    $debug .= $dbg . "\n";
}

print "Content-type: text/plain\n\n";
print <<EF;
"ARRAY" BEST TIME : $best_time_array
"ARRAY" TOTAL TIME: $total_time_array
"WHILE" BEST TIME : $best_time_while
"WHILE" TOTAL TIME: $total_time_while
----------------------------------
EF
exit;

# Create a test file of roughly $TEST_FILE_SIZE_MB megabytes if it doesn't exist:
# each line is 988 random digits, a '|' separator and a timestamp.
sub testfilecheck {
    unless (-e $file) {
        open(NEW, ">$file") or die "Can't create $file: $!";
        for (my $i = 0; $i < $TEST_FILE_SIZE_MB * 1000; $i++) {
            my $rnd;
            for (my $y = 0; $y < 988; $y++) {
                $rnd .= int(rand() * 9);
            }
            print NEW $rnd . '|' . time() . "\n";
        }
        close NEW;
    }
}

# Version#1: slurp the whole file into an array, truncate it in place and
# rewrite it through the same handle.
sub use_array {
    my $startexectimemilliseconds = [ Time::HiRes::gettimeofday() ];
    my ($debug, $count, $lastline);
    open(DAT, "+<$file") or die "Can't open $file: $!";
    flock DAT, 2;                      # 2 = LOCK_EX
    my @DATfile = <DAT>;
    seek(DAT, 0, 0);
    truncate(DAT, 0);
    foreach my $line (@DATfile) {
        chomp($line);
        my $replace = '|' . time();
        $line =~ s/\|\d+$/$replace/;   # refresh the trailing timestamp
        print DAT $line . "\n";
        $lastline = $line;
        $count++;
    }
    close DAT;
    my $elapsedtime = Time::HiRes::tv_interval($startexectimemilliseconds);
    $debug = <<EF;
method: ARRAY
exec: $elapsedtime
count: $count
$lastline
EF
    return ($elapsedtime, $debug);
}

# Version#2: serialise on a flag file, stream LOG line by line into a temp
# file and rename() the temp file over LOG.
sub use_while {
    my $startexectimemilliseconds = [ Time::HiRes::gettimeofday() ];
    my ($debug, $count, $lastline);
    open(LOCK, "<$flagfile") || open(LOCK, ">$flagfile");   # open or create the flag file
    flock LOCK, 2;                     # 2 = LOCK_EX
    open(DAT, $file) or die "Can't open $file: $!";
    flock DAT, 2;
    open(TMP, ">$tempfile") or die "Can't create $tempfile: $!";
    flock TMP, 2;
    while (my $line = <DAT>) {
        chomp($line);
        my $replace = '|' . time();
        $line =~ s/\|\d+$/$replace/;   # refresh the trailing timestamp
        print TMP $line . "\n";
        $lastline = $line;
        $count++;
    }
    close TMP;
    close DAT;
    rename($tempfile, $file) or die "Can't rename $tempfile to $file: $!";
    close LOCK;
    my $elapsedtime = Time::HiRes::tv_interval($startexectimemilliseconds);
    $debug = <<EF;
method: WHILE
exec: $elapsedtime
count: $count
$lastline
EF
    return ($elapsedtime, $debug);
}

        By the way, I've completed several heavy stress tests by launching tens of script instances at once, and there were no problems with file integrity. Every script waited until the previous instance finished working with the file. Thanks for your tip about using a flag file!

        Also, I realized that Version#2 is more resilient to hardware crashes or loss of power. Even if the HDD shuts down during a write operation, there are always two copies of the file - DAT and TMP - and the data can always be recovered from one of them.
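
        A rough recovery sketch for that case (the file names and the size heuristic are only illustrative, not part of the real script): at startup, if a leftover temp file is found, keep whichever copy looks complete.

        use strict;
        use warnings;

        my $file    = 'script.log';                # placeholder name
        my ($stale) = glob("$file.tmp*");          # leftover temp file from a crashed run, if any

        if (defined $stale) {
            if (!-e $file || -s $stale >= -s $file) {
                # The temp copy looks at least as complete as LOG: promote it.
                rename $stale, $file or die "recover: $!";
            } else {
                # LOG looks good: discard the partial temp file.
                unlink $stale;
            }
        }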