Hi,
First of all, this is just a hypothesis. I am not a low-level operating-system I/O programmer, so I do not really know. I did have trouble with readdir, though, on an MS platform: my code simply did not see new files, despite calling opendir()/closedir() on every iteration of its loop. The origin of the problem turned out to be the I/O system of MS Windows Server. I have experienced similar read and write problems on my Linux laptop. In both cases the sync command (or its MS equivalent) solved the issue.
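For illustration, here is the kind of polling loop I mean, as a minimal sketch; the directory name and sleep interval are made up for the example:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $dir = '/var/spool/incoming';    # hypothetical watch directory

    while (1) {
        opendir(my $dh, $dir) or die "opendir $dir: $!";
        my @files = grep { !/^\.\.?$/ } readdir $dh;
        closedir $dh;
        print "saw: @files\n";
        # Files created by another process in the meantime were
        # sometimes still missing from the next pass.
        sleep 5;
    }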
The sync command also affects read operations. When you issue the opendir() call, your script receives a consistent snapshot of the combined disk+buffer state. This snapshot must remain constant while the script processes the directory content, or the array holding that content would vary in size underneath it.
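To keep that list stable, I read the whole directory into an array in one go and work on the copy rather than on the handle; a small sketch:

    use strict;
    use warnings;

    opendir(my $dh, '.') or die "opendir: $!";
    # readdir in list context returns the entries as seen at this
    # point; what happens to files created or deleted afterwards is
    # up to the filesystem and its buffering.
    my @snapshot = readdir $dh;
    closedir $dh;

    for my $name (@snapshot) {
        next if $name =~ /^\.\.?$/;
        print "$name\n";
    }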
In this context, imagine millions of opendir()/closedir() operations. The burden on the operating system of providing your script with consistent buffer+disk snapshots becomes quite high, so what usually works might fail in your case. Issuing a sync command writes the dirty I/O buffers, meaning the ones that have been changed but are not yet visible in your snapshot, out to disk. That simplifies the read operation, because fewer dirty buffers remain, and it might solve the issue.
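The workaround I used, with the caveat that sync(1) is a Linux command and the Windows equivalent is a separate tool, is simply to shell out to sync before re-reading the directory:

    use strict;
    use warnings;

    # Flush dirty buffers to disk before the next directory scan.
    # On Windows a third-party tool (e.g. Sysinternals sync.exe)
    # would take the place of /bin/sync.
    system('sync') == 0
        or warn "sync failed with exit status $?\n";

    opendir(my $dh, '/var/spool/incoming') or die "opendir: $!";
    my @files = grep { !/^\.\.?$/ } readdir $dh;
    closedir $dh;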
Why Perl does not detect the problem, I do not know. In the cases I experienced, I could not find an error message at any level. I suppose the operating system does not return an error code to Perl, unless perhaps it is put into some debugging mode.
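For what it is worth, the usual way to test readdir itself for failure is to clear $! before the call and check it afterwards, because readdir signals both end-of-directory and error by returning nothing; a sketch:

    use strict;
    use warnings;

    my $dir = '.';
    opendir(my $dh, $dir) or die "opendir $dir: $!";

    $! = 0;                                  # clear errno first
    my @entries = readdir $dh;
    die "readdir $dir failed: $!" if $!;     # errno set => real error

    closedir $dh or die "closedir $dir: $!";
    print scalar(@entries), " entries\n";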
If this is the origin of the problem, there is no real solution and no way to avoid it. The management of dirty buffers is a problem on every system; I know this from my database experience.
K
In reply to Re^6: Testing for readdir failure by Zzenmonk
in thread Testing for readdir failure by Bob Cook