fsyncing directories

by betterworld (Curate)
on Apr 27, 2010 at 10:15 UTC

betterworld has asked for the wisdom of the Perl Monks concerning the following question:

Hi,

According to the Linux man page of fsync(2):

Calling fsync() does not necessarily ensure that the entry in the directory containing the file has also reached disk. For that an explicit fsync() on a file descriptor for the directory is also needed.

So, how do we fsync directories in Perl? perldoc POSIX says that fsync() should be done with the sync method in IO::Handle. So I tried this with a directory:

use IO::Handle;
open my $f, '/tmp' or die $!;
$f->sync or die "fsync: $!\n";

This bails out with "fsync: Invalid argument". According to strace, no fsync syscall is even tried.

I tried opendir too:

use IO::Handle;
my $f = IO::Handle->new;
opendir $f, '/tmp' or die $!;
$f->sync or die "fsync: $!\n";

with the same result.

On the other hand, an equivalent C program succeeds and properly calls fsync according to strace:

#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>

int main (void)
{
    int fd = open("/tmp", 0);
    if (fd < 0) {
        perror("open: /tmp");
        exit(1);
    }
    if (fsync(fd) < 0) {
        perror("fsync: /tmp");
        exit(1);
    }
    return 0;
}

My question: What is the proper way to do fsync with directories in Perl? Or is this just a bug in IO::Handle?

(All tests have been done on a current Debian Squeeze (testing) with perl 5.10.1-12.)

Replies are listed 'Best First'.
Re: fsyncing directories
by salva (Canon) on Apr 27, 2010 at 12:18 UTC
    Or is this just a bug in IO::Handle?

    Mostly, yes: IO::Handle::sync requires a file handle opened with write access.

    The following workaround may work:

    use IO::Handle;
    sysopen my $dh, "/etc/", 0 or die "unable to open dir";
    open my $dh1, ">&", $dh or die "unable to dup handle";
    $dh1->sync or die "dir sync failed";

      Thank you, this is a nice workaround :) IO::Handle accepts this file handle because it thinks that it is opened for writing.

      From the strace output it seems that this does in fact fsync the directory. Excerpt:

      open("/etc/", O_RDONLY) = 3 dup(3) = 4 fsync(4) = 0

      I don't think the dup() hurts the functionality, though of course I cannot be sure: I'd have to crash my system repeatedly to see if the effect is the same.

Re: fsyncing directories
by ikegami (Patriarch) on Apr 27, 2010 at 12:51 UTC

    IO::Handle's sync requires an output file handle; I don't know if Perl provides a means of getting an output file handle to a directory. Beyond that requirement, it's just a wrapper for C's fsync.

    Update: You can get one via dup:

    $ perl -MIO::Handle -e'
        open(my $fh, "<", $ARGV[0]) or die;
        open(my $ofh, ">&", $fh) or die;
        $ofh->sync or die;
        print "success\n";
    ' some_dir
    success
Re: fsyncing directories
by Anonymous Monk on Apr 27, 2010 at 11:33 UTC
    Have you tried File::Sync?

      Thanks, File::Sync seems to work:

      use File::Sync qw(fsync);
      open my $f, '/tmp' or die $!;
      fsync($f) or die "fsync: $!\n";

      I should have done a CPAN search myself...

      However, IO::Handle is a core module, File::Sync isn't. As IO::Handle already has a sync() method (and perldoc POSIX documents that this is how to do fsync), it seems a bit excessive to pull in CPAN modules just because your file handle is a directory handle...

Re: fsyncing directories
by JavaFan (Canon) on Apr 27, 2010 at 12:38 UTC
    Or is this just a bug in IO::Handle?
    Probably. I'd even argue that the Perl way is to do the right thing, so an IO::Handle->sync call (where the handle is a file handle) should first do a flush (to flush out the API buffers), then a file sync, then a directory sync.

    But no doubt there's some code out there somewhere that prevents a change towards DWIM behaviour.

Re: fsyncing directories
by ig (Vicar) on Apr 27, 2010 at 17:07 UTC

    I wonder what your objective is. For a thought-provoking discussion (not all happy thoughts, unfortunately) you might have a look at https://bugs.edge.launchpad.net/ubuntu/+source/linux/+bug/317781 - you might skim through to entry 45 by Theodore Ts'o. In short, not even sync, much less fsync, is adequate for ensuring your data is written to disk and reliably retrievable, though both/either might reduce your risk.

      I had that particular bug tracker item open in my browser earlier today :)

      However, it is quite long and I did not read everything in that discussion. The topic is the difference between ext3 and ext4, particularly their data=ordered and delayed-allocation features. Where does it say that not even sync, much less fsync, is adequate for ensuring your data is written to disk and reliably retrievable? The entry you reference (45) states quite the opposite, imho: on ext4 (especially when replacing files on pre-2.6.30 kernels or thereabouts) you must fsync() your files at the appropriate times to prevent a high risk of data loss. On ext3 it helps too (a bit), though it might slow things down.

      I wonder what your objective is.

      I was just reading that passage in the man page (which I quoted in my original post) and was wondering how to do that in Perl.

      Update: That entry 45 shares a lot of text with this blog post, which I found quite enlightening.

        Where does it say that not even sync, much less fsync, is adequate for ensuring your data is written to disk and reliably retrievable?

        Sorry - I read that thread a long time ago and summarized my recollection of a lot of reading stimulated by it. In comment #56, Theodore says of his recommended best practice for syncing from the application: "It is not fool-proof, but then again ext3 was never fool-proof, either." But he doesn't say much about what the remaining risks are.

        A point that was well made elsewhere (e.g. http://www.hitachigst.com/hdd/technolo/writecache/writecache.htm (see the first paragraph under "What is write caching?") and http://old.nabble.com/ext4-finally-doing-the-right-thing-td27186399.html) is that it is not only the operating system and file system that buffer data and potentially reorder operations: many disk drives have intelligent controllers that also buffer data and reorder writes, and those controllers don't necessarily commit their caches when you issue a sync at the operating/file system level.

        Risk of data loss or corruption can be reduced by calling the sync functions from the application at critical points - I don't mean to discourage doing so; it can be good practice. But don't get carried away, or system performance may be adversely impacted.

        In addition, some file system operations create more risk than others, and consideration should be given to how the file system is used, as well as when it is synchronized to disk.

        Finally, for a good balance of reliability and performance, there are many configuration settings that should be considered, in the operating system, file system, RAID and virtual disk system (be they encrypting, compressing or whatever) and in the disk drives and interfaces.
