This all depends on what you mean by "being written". On POSIXy systems, syswrite, mtime & friends should be regarded as atomic (error recovery aside), so any OS-level write updates mtime, file size etc. immediately - regardless of whether the data has actually reached the disk yet.
OS-level caching is completely transparent to these operations. That also means there's no such thing as "being written": a given write has either happened (barring errors) or it hasn't, and from the POV of these operations there is no in-between.
For instance:
#!/usr/local/bin/perl -w
use strict;
open F, ">/tmp/test" or die $!;
for (1 .. 1000) {
    syswrite F, ".";              # unbuffered, goes straight to the OS
    my $s = -s F;                 # size as the OS reports it
    warn "$s != $_" if $s != $_;
}
The code above reports no anomalies on my system.
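The same goes for mtime, though its granularity is often no finer than a second, so the writes have to be spaced out to see it move. A rough sketch (not part of the original test, and assuming a filesystem with second-level timestamps):

#!/usr/local/bin/perl -w
use strict;
open F, ">/tmp/test" or die $!;
syswrite F, ".";
my $t1 = (stat F)[9];             # mtime right after the first write
sleep 2;                          # outwait the one-second granularity
syswrite F, ".";
my $t2 = (stat F)[9];             # mtime right after the second write
warn "mtime did not advance" if $t2 <= $t1;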
Confusion may set in because perl does some additional buffering by itself when you use the ordinary print(f) functions:
#!/usr/local/bin/perl -w
use strict;
open F, ">/tmp/test" or die $!;
for (1 .. 1000) {
    print F ".";                  # lands in perl's buffer, not the OS
    my $s = -s F;
    warn "$s != $_" if $s != $_;
}
Gives:
0 != 1 at test.pl line 7.
0 != 2 at test.pl line 7.
0 != 3 at test.pl line 7.
etc etc.
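Turning perl's buffering off for the handle makes print behave like syswrite again. A quick sketch using the classic one-argument select idiom (IO::Handle's autoflush method does the same):

#!/usr/local/bin/perl -w
use strict;
open F, ">/tmp/test" or die $!;
select((select(F), $| = 1)[0]);   # enable autoflush on F
for (1 .. 1000) {
    print F ".";                  # now flushed to the OS immediately
    my $s = -s F;
    warn "$s != $_" if $s != $_;
}

With autoflush on, this should report no anomalies either.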
See also syswrite.
Updated to fix a typo in the second example.
Update 2: bottom line:
If your remote program issues write()s to the system often enough, and/or you peek at mtime etc. infrequently enough, your strategy should work. But any program that does fairly heavy buffering of its own (and I would guess many interesting programs do) may throw it off. So be conservative when choosing your intervals.
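Something along these lines (the path and interval here are just placeholders): poll size and mtime, and only treat the file as complete once neither has changed between two polls.

#!/usr/local/bin/perl -w
use strict;
my $file = "/tmp/test";                   # hypothetical path
my @st = stat $file or die "stat: $!";
my ($size, $mtime) = @st[7, 9];           # size and mtime
while (1) {
    sleep 5;                              # conservative polling interval
    @st = stat $file or die "stat: $!";
    my ($s, $m) = @st[7, 9];
    last if $s == $size and $m == $mtime; # no change since last poll
    ($size, $mtime) = ($s, $m);
}
print "no change between polls - assuming the writer is done\n";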