Re^4: using lexically scoped variable as a filehandle
by jhourcle (Prior) on Feb 21, 2007 at 15:40 UTC
Update: I also question the likelihood of close actually failing, but that's probably a different discussion
You've never tried writing to a full disk, or a mounted filesystem that went away mid-process then.
It's not likely for most people, but it does happen. I do agree that just 'die'ing on failure is a bad thing for these sorts of situations, though.
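To make the failure mode concrete, here is a minimal sketch (the path and data are assumed for illustration) of checking both the print and the close. The point is that buffered data is flushed by close, so a full disk can surface its error there rather than at the print:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $path = '/tmp/demo.txt';   # hypothetical path, for illustration only

open my $fh, '>', $path
    or die "open '$path' failed: $!";

# This may "succeed" even on a full disk: the data may only have
# reached the stdio buffer, not the filesystem.
print {$fh} "some data\n"
    or warn "print to '$path' failed: $!";

# The final flush happens here, so ENOSPC (disk full) can show up
# at close rather than at print. Warn rather than die, per the thread.
close $fh
    or warn "close '$path' failed: $!";
```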
Well yes, but think about it. For the close to fail, rather than a preceding write, you would have to be writing just under one buffer-full (512B/4KB/whatever) more than the filesystem could accommodate.
Your program would have to write enough data to fill the last block on disk, and then just a little bit more, before deciding to stop writing and close the file. A few bytes less and no error occurs, because the file is successfully written. A few bytes more and you (should) never make it to the close, because the write/print will have failed. That's a pretty small window of opportunity. And that's the simple case.
Most filesystems do caching outside the auspices of the C-runtime cache, which means that you'd likely successfully close the file, flushing the last few bytes to OS cache, and the failure wouldn't occur until sometime after, possibly long after your process has terminated. In the meantime, some other process may delete or truncate a file, or the OS terminates a swapped-out process and frees a lump of disk space. A million things.
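The two layers of buffering described above can be addressed separately. A hedged sketch, assuming a writable path: `flush` pushes the stdio buffer to the OS cache, and `sync` (fsync) asks the OS to push its cache to the physical disk, narrowing the window in which a failure happens after your process has already moved on:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;   # provides flush() and sync() on filehandles

my $path = '/tmp/demo_sync.txt';  # hypothetical path, for illustration

open my $fh, '>', $path or die "open '$path' failed: $!";
print {$fh} "important data\n" or die "print failed: $!";

# flush() moves the C-runtime buffer into the OS cache; an error such
# as ENOSPC can surface here instead of being deferred to close().
$fh->flush or die "flush failed: $!";

# sync() (fsync) asks the OS to commit its cache to disk, so a failure
# is reported to this process rather than happening after it exits.
$fh->sync or die "sync failed: $!";

close $fh or die "close '$path' failed: $!";
```

Whether sync is worth the performance cost depends on how much the data matters, which is exactly the context point made elsewhere in this thread.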
The mounted filesystem case is somewhat different, but again, the odds that the filesystem would go away at exactly the moment between your having successfully written the last buffer load (to cache), and deciding to close that file when the flushing of that last buffer would fail, are really very slim.
The really interesting question is what do you do about it when you detect this situation? At that point, unless you are lucky enough to be running on a system with multiple disks, even trying to log the failure is likely to encounter the same full-disk scenario. You could also think about re-tries in the hope that some other process might have freed some space up. Or you could try deleting temp files, etc. But in the end, if the scenario happens, there is unlikely to be a good recovery strategy.
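If you do want to attempt a retry, note that retrying close() itself is unsafe: on most systems the descriptor is gone after the first attempt, successful or not. The retry has to happen at the flush, before the close. A hypothetical sketch (the helper name, path, and retry counts are invented for illustration; whether buffered data survives a failed flush for a retry is implementation-dependent, so this is best-effort only):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;   # for flush()

# Best-effort retry of the buffer flush, in the hope that another
# process frees some disk space in the meantime. Hypothetical helper.
sub flush_with_retry {
    my ($fh, $tries, $delay) = @_;
    for my $attempt (1 .. $tries) {
        return 1 if $fh->flush;
        warn "flush failed (attempt $attempt of $tries): $!";
        sleep $delay if $attempt < $tries;
    }
    return 0;
}

my $path = '/tmp/demo_retry.txt';  # hypothetical path
open my $fh, '>', $path or die "open '$path' failed: $!";
print {$fh} "payload\n" or die "print failed: $!";

flush_with_retry($fh, 3, 1) or die "could not flush after retries: $!";
close $fh or die "close '$path' failed: $!";
```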
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
I don't know about you, but very few of the systems I work with have asynchronous I/O enabled (and I'm not sure perl can use it even if it is - it's often a separate API). The OS cache should be acutely aware of the filesystem's fullness, even if it doesn't write immediately (which, during a flush, I think it does, but that's not material to my point). It should be able to return any errors immediately, as this cache is being shared among all processes working with that filesystem. Since the C library is flushing during a close, it will get that error immediately as well.
I'm not disagreeing with you - I think the above actually accentuates your point rather than detracting from it. I'm saying this problem would come up even more rarely than you suggested.
Re^4: using lexically scoped variable as a filehandle
by shmem (Chancellor) on Feb 21, 2007 at 17:09 UTC
So, IMO, you almost never want to die on close failure, warn maybe, but not die.
If a close fails, I surely want to die, as this is an unrecoverable error. Well, I could just warn, but then what? What next? Sleep and wait for a signal, and later do what?
If a close fails, I most certainly have data inconsistency - e.g. a corrupt file - and that's why I have to stop further processing immediately.
--shmem
_($_=" "x(1<<5)."?\n".q·/)Oo. G°\ /
/\_¯/(q /
---------------------------- \__(m.====·.(_("always off the crowd"))."·
");sub _{s./.($e="'Itrs `mnsgdq Gdbj O`qkdq")=~y/"-y/#-z/;$e.e && print}
A question, more for curiosity than utility:
Let's say I use Fatal qw{ close }; which changes the behavior of close to throw an exception on failure instead of returning false. Let's also say the automatic (invisible) end-of-scope close failed. Would the exception still be thrown?
If so, is this a useful way to improve the context-dependent handling discussed above?
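For what it's worth, a sketch of the explicit case (path and data are assumed for illustration). Fatal wraps the close() call made in your own code, so an explicit close that fails throws an exception you can catch with eval. The implicit end-of-scope close, by contrast, happens inside the handle's destruction and does not go through the wrapped close(), so - as far as I can tell - no exception would be raised there:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fatal qw(close);   # close() now dies on failure instead of returning false

my $path = '/tmp/demo_fatal.txt';  # hypothetical path
open my $fh, '>', $path or die "open '$path' failed: $!";
print {$fh} "data\n" or die "print failed: $!";

# An explicit close() goes through Fatal's wrapper, so a failure here
# raises an exception that eval {} can trap:
eval { close $fh; 1 } or warn "close raised an exception: $@";

# When $fh simply went out of scope instead, the close would happen in
# the handle's destruction, bypassing the wrapper - and failing silently.
```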
It obviously depends upon what else your program is doing
That's the point here - context :-)
It depends on the type of file, on its importance, on what the whole program is about -
but if I want to die on a failed open (and not retry, close & retry, re-initialize handles, etc., whatever), I almost certainly want to die on a failed close as well. But, that depends... my point is: if a close fails, something very unusual, weird and unforeseen is happening, for which I don't have automatic recovery strategies, and I'd better not do anything beyond reporting the fact.
--shmem