Fellow monks,
Below is a script (write.pl) that appends 100k lines to a log file, with deliberate random delays around each print():
    #!/usr/bin/perl
    use autodie;
    use Time::HiRes qw(nanosleep);

    unless (@ARGV == 2) { die "Usage: $0 <path> <str>\n" }
    $|++;
    for (1..100_000) {
        open my($fh), ">>", $ARGV[0];
        nanosleep(1_000_000*rand());
        print $fh "$ARGV[1]: $_\n";
        nanosleep(1_000_000*rand());
        close $fh;
    }
I run 100 instances of this script:
$ for i in `seq 1 100`;do ( ./write.pl log $i & );done
As you can see, I'm trying to produce a race condition where one process's write clobbers another's. However, I cannot seem to do so: all lines from all processes end up in the log file. This is tested using:
    $ wc -l log
    $ grep '^1:' log | wc -l
    $ grep '^2:' log | wc -l
    ...
What am I doing "right"? Is it really safe then to open in append mode, write once, then close again, without any lock (I'm thinking no way)? Only on certain platforms? FYI, I'm writing a File::Write::Rotate module (something like Log::Dispatch::FileRotate but more lightweight and not tied to Log::Dispatch).
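In case it helps show what I have in mind for the module: even if single O_APPEND writes happen not to interleave on local Linux filesystems, that isn't guaranteed by POSIX for all write sizes (or over NFS), so I'm considering taking an exclusive flock around each append anyway. A minimal sketch (safe_append is a made-up helper name, not File::Write::Rotate's actual API):

```perl
use strict;
use warnings;
use autodie;
use Fcntl qw(:flock);
use IO::Handle;

# Hypothetical helper: append one line under an exclusive lock.
# The lock mainly protects the rotation step (stat/rename) from
# racing writers; for the write itself it's cheap insurance on
# top of O_APPEND.
sub safe_append {
    my ($path, $line) = @_;
    open my $fh, ">>", $path;
    flock $fh, LOCK_EX;
    $fh->autoflush(1);   # flush while we still hold the lock
    print $fh $line;
    close $fh;           # closing the handle releases the lock
}
```

Since close() releases the flock, there is no separate unlock step to forget.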
BTW, strace shows that perl delays the write (it should have happened before the second nanosleep()):
open("log", O_WRONLY|O_CREAT|O_APPEND, 0666) = 3
lseek(3, 0, SEEK_END) = 4750343
ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fffc9877930) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 4750343
fstat(3, {st_mode=S_IFREG|0644, st_size=4750370, ...}) = 0
fcntl(3, F_SETFD, FD_CLOEXEC) = 0
nanosleep({0, 992359}, 0x7fffc9877cf0) = 0
nanosleep({0, 347917}, 0x7fffc9877cf0) = 0
write(3, "95: 3281\n", 9) = 9
close(3) = 0
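Presumably the write sits in perl's buffer until close because $|++ only turns on autoflush for the currently selected handle (STDOUT), not for $fh. Turning on autoflush on the file handle itself should make the write() syscall happen at the print; a minimal sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use autodie;
use IO::Handle;

open my $fh, ">>", "log";
$fh->autoflush(1);   # $|++ would only affect the selected handle (STDOUT)
print $fh "flushed at print time\n";   # write() happens here, not at close
close $fh;
```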
Debian Linux amd64, kernel 3.2.0, Perl 5.14.2
In reply to Safe to open+close for every write, without locking? by sedusedan