Can you think of circumstances in which close will fail? I do :-)
If you say
open my $fh, '>>', $file or die $!;
you should also say
close $fh or die $!;
when done, IMHO.
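For concreteness, a minimal sketch of that pattern with every call checked (the file name and the record written are placeholders made up for this example, not anything from the thread):

    use strict;
    use warnings;

    my $file = '/tmp/append.log';    # placeholder path

    open my $fh, '>>', $file    or die "open '$file' failed: $!";
    print {$fh} "some record\n" or die "print to '$file' failed: $!";
    close $fh                   or die "close '$file' failed: $!";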
--shmem
_($_=" "x(1<<5)."?\n".q·/)Oo. G°\ /
/\_¯/(q /
---------------------------- \__(m.====·.(_("always off the crowd"))."·
");sub _{s./.($e="'Itrs `mnsgdq Gdbj O`qkdq")=~y/"-y/#-z/;$e.e && print}

Can you think of circumstances in which close will fail? I do :-)
If you say open my $fh, '>>', $file or die $!; you should also say close $fh or die $!; when done, IMHO
I'm gonna call you on this. Not because you are wrong, but because I haven't made up my mind about it yet.
Let's consider the two scenarios, the open failing and the close failing:
- The open for append may fail for a variety of reasons: existence, permissions, the file being in use, etc.
Dying at this point in the proceedings may waste whatever work was done till now, but this output file hasn't been touched, so recovery probably consists of correcting whatever caused the failure and re-running the program.
- For the close to fail, it basically comes down to one reason, though there could be several underlying causes: some amount of written and cached, but unflushed, output could not be flushed to disk.
Recovery is altogether more complicated. The file has almost certainly been modified, but not in a consistent manner, so the error definitely needs to be recorded. Unless the application makes provision to record the file length prior to writing to the file, there is no possibility of automated recovery, as there is no way to determine how much was not flushed.
But dying at this point achieves nothing except to ensure that all subsequent processing is aborted, which will often compound the problem, by leaving other persistent state in an indeterminate place, rather than alleviating it.
So, IMO, you almost never want to die on a close failure; warn, maybe, but not die.
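A sketch of what that might look like in practice, assuming the application records the file length before appending as suggested above; $file and the rollback-by-truncate policy are illustrative assumptions, not code from this thread:

    use strict;
    use warnings;

    my $file = '/tmp/append.log';            # illustrative path
    my $original_size = (-s $file) || 0;     # remember the length before we touch it

    open my $fh, '>>', $file or die "open '$file' failed: $!";
    print {$fh} "record $_\n" for 1 .. 1000;

    unless (close $fh) {
        warn "close '$file' failed: $!; some output may not have been flushed";

        # Best-effort rollback to the recorded length so the file is consistent
        # again; if even that fails, record the fact and carry on rather than die.
        truncate $file, $original_size
            or warn "could not truncate '$file' back to $original_size bytes: $!";
    }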
Update: I also question the likelihood of close actually failing, but that's probably a different discussion.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Update: I also question the likelihood of close actually failing, but that's probably a different discussion
You've never tried writing to a full disk, or to a mounted filesystem that went away mid-process, then.
It's not likely for most people, but it does happen. I do agree that just 'die'ing on failure is a bad thing for these sorts of situations, though.
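On Linux this is easy to reproduce with /dev/full, which accepts the open but fails every flush with ENOSPC; a small buffered print appears to succeed and the error only shows up when close flushes the buffer. This is just an illustration of the scenario, not code posted above:

    use strict;
    use warnings;

    open my $fh, '>', '/dev/full' or die "open /dev/full failed: $!";

    print {$fh} "x" x 10;    # small enough to sit in the output buffer, so no error yet

    if (close $fh) {
        print "close succeeded (unexpected)\n";
    }
    else {
        print "close failed: $!\n";    # typically "No space left on device"
    }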
So, IMO, you almost never want to die on close failure, warn maybe, but not die.
If a close fails, I surely want to die, as this is an unrecoverable error. Well, I could just warn, but then what? Sleep and wait for a signal, and later do what?
If a close fails, I most certainly have data inconsistency (e.g. a corrupt file), and that's why I have to stop further processing immediately.
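One way to act on that view, sketched here under assumptions of my own ($file, the .corrupt suffix and write_records() are invented for illustration): treat a failed close as fatal, but first mark the output as suspect so nothing downstream consumes the corrupt file.

    use strict;
    use warnings;

    sub write_records {
        my ($file, @records) = @_;

        open my $fh, '>', $file or die "open '$file' failed: $!";
        print {$fh} "$_\n" for @records;

        unless (close $fh) {
            my $err = $!;
            # Mark the inconsistent output so later stages cannot pick it up,
            # then abort further processing immediately.
            rename $file, "$file.corrupt"
                or warn "could not rename '$file' to '$file.corrupt': $!";
            die "close '$file' failed ($err); output marked corrupt, stopping";
        }
        return 1;
    }

    write_records('/tmp/output.dat', 'alpha', 'beta', 'gamma');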
--shmem
_($_=" "x(1<<5)."?\n".q·/)Oo. G°\ /
/\_¯/(q /
---------------------------- \__(m.====·.(_("always off the crowd"))."·
");sub _{s./.($e="'Itrs `mnsgdq Gdbj O`qkdq")=~y/"-y/#-z/;$e.e && print}
Agreed, shmem, I wish more people would follow that advice. Even though 99% of the time close() will not fail, it's always that 1% that ends up biting you in the ass :o
__________
Systems development is like banging your head against a wall...
It's usually very painful, but if you're persistent, you'll get through it.