in reply to Re^5: using lexically scoped variable as a filehandle
in thread using lexically scoped variable as a filehandle

I don't know about you, but very few of the systems I work with have asynchronous I/O enabled (and I'm not sure perl can use it even if it is - it's often a separate API). The OS cache should be acutely aware of the filesystem's fullness, even if it doesn't write immediately (which, during a flush, I think it does, but that's not material to my point). It should be able to return any errors immediately, since that cache is shared among all processes working with that filesystem. And since the C library flushes its buffers during a close, close will get that error immediately as well.

I'm not disagreeing with you - I think the above actually accentuates your point rather than detracting from it. I'm just saying that this problem would come up even more rarely than you suggested.
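As a minimal sketch of the point (the filename and the disk-full situation are hypothetical), the error from a buffered write that never reached the disk typically surfaces when the C library flushes at close, which is why close's return value is the one worth checking here:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical path on a nearly full filesystem; the write itself may
    # "succeed" because it only lands in the stdio/PerlIO buffer.
    my $file = '/mnt/almost_full/out.txt';

    open( my $fh, '>', $file ) or die "open '$file' failed: $!";

    # print() can return true even though nothing has reached the disk yet.
    print {$fh} "some data\n" or warn "print failed: $!";

    # close() flushes the buffer; a deferred disk-full (ENOSPC) error shows
    # up here, so check its return value too.
    close( $fh ) or die "close '$file' failed: $!";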


Re^7: using lexically scoped variable as a filehandle
by BrowserUk (Patriarch) on Feb 21, 2007 at 17:43 UTC

    Actually, this has nothing to do with Asynchronous I/O, at least as I understand that term. In my world, AsyncIO relates to whether or not I/O to files blocks the calling user-mode thread until the kernel-mode processing completes.

    For a better description see Synchronous & Asynchronous IO (MSDN).

    What I was referring to is termed File caching (MSDN), and is the default behaviour on Win32 systems. I seem to recall HP/UX and AIX systems did something similar, but I could be wrong on that. This behaviour is external to processes and can only be defeated by explicit, low-level requests to the OS at the time of opening the file. (See the discussion associated with FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH at CreateFile (MSDN).) Even then it imposes special conditions.

    Basically, most file IO on a Win32 system, and certainly any done from within Perl without recourse to Win32::API/Win32API::File, goes through the system cache. This allows the OS to update the disk out of sequence with the application's writes, and to read ahead of the application's reads, thereby making IO more efficient by requiring less seeking.
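    By way of illustration (untested here, and the path is hypothetical), Win32API::File exposes CreateFile directly, so a write-through handle that sidesteps the lazy-write behaviour of the cache might be opened along these lines:

        use strict;
        use warnings;
        use Win32API::File qw( :ALL );   # CreateFile, OsFHandleOpen, constants

        # Hypothetical log file; FILE_FLAG_WRITE_THROUGH asks the OS to push
        # each write through the system cache to the device.
        my $hOS = CreateFile(
            'C:\\temp\\critical.log',
            GENERIC_WRITE(),
            FILE_SHARE_READ(),
            [],                    # default security attributes
            OPEN_ALWAYS(),         # create the file if it does not exist
            FILE_FLAG_WRITE_THROUGH() | FILE_ATTRIBUTE_NORMAL(),
            [],                    # no template handle
        ) or die "CreateFile failed: $^E";

        # Wrap the native handle in an ordinary Perl filehandle.
        OsFHandleOpen( \*LOG, $hOS, 'w' ) or die "OsFHandleOpen failed: $!";

        print LOG "logged with write-through\n" or warn "print failed: $!";
        close LOG                              or die "close failed: $!";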

    It is certainly possible for an application to re-write/overwrite an existing file block and for the process to terminate (marginally) before the dirty system cache block is flushed to disk. Whether the same is possible for data being appended to the end of an existing file I am unsure. The difference is that in the former case the cached data already has disk space reserved to back it, whereas in the latter it probably does not.
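    Where that matters, the usual defence from ordinary Perl is to flush and fsync explicitly before carrying on. A sketch using IO::Handle (the filename is hypothetical, and sync()/fsync may not be implemented on every platform or build):

        use strict;
        use warnings;
        use IO::Handle;   # provides flush() and sync() on filehandles

        my $file = 'results.dat';   # hypothetical output file
        open( my $fh, '>>', $file ) or die "open '$file' failed: $!";

        print {$fh} "appended record\n" or die "print failed: $!";

        # flush() pushes Perl's own buffer down to the OS; sync() (fsync)
        # asks the OS to write its dirty cache blocks for this file to disk
        # now, so a crash just after this point cannot lose the data.
        $fh->flush or die "flush failed: $!";
        $fh->sync  or die "sync failed: $!";

        close( $fh ) or die "close '$file' failed: $!";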

    I guess the upshot is that, for critical applications, it is necessary to check for and handle possible failures in all system calls. For non-critical applications, the need to check for the rare case of close failing is far less pressing than the need to check for open failures and write failures.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.