in reply to filesystems & Perl
If readdir() gives you a filename, that file may no longer exist in the directory by the time you get around to running a file test on it.
Example: in your code's while (my $SUB = readdir $DIR) loop, by the time you reach the file test on $SUB (the if (-d $SUB) {}), that file may not exist anymore!
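Here is a minimal sketch of what a race-aware version of that loop might look like (the directory name /some/dir is a stand-in). Note that readdir() hands back bare names, so a file test needs the directory prepended, and even then the test can fail for an entry you were just given:

    use strict;
    use warnings;

    # Sketch of a race-aware scan; "/some/dir" is a stand-in.
    # readdir() returns bare names, so the directory must be
    # prepended before any file test -- and an entry can still
    # vanish between the readdir() call and the test itself.
    my $dir = '/some/dir';
    opendir my $DIR, $dir or die "opendir $dir: $!";
    while (defined(my $sub = readdir $DIR)) {
        next if $sub eq '.' || $sub eq '..';
        my $path = "$dir/$sub";
        if (-d $path) {
            print "was a directory at this instant: $path\n";
        }
        elsif (!-e $path) {
            # deleted between readdir() and the file test
            warn "vanished before we could test it: $path\n";
        }
    }
    closedir $DIR;

(The defined() wrapper also guards against a file literally named "0" ending the loop early.)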
When you call readdir(), you are traversing a data structure maintained by the file system. In the general case, what happens when a new entry is added to that structure while you are traversing it is undefined. But even if readdir() does pick up that "new" file, what have you gained?
What is the difference between a file appearing just before readdir() finishes and one appearing a nanosecond after it finishes? If you "miss" a file, it may not matter at all, because you will pick it up on the next pass.
Since the directory is constantly changing, you are going to have to process it repeatedly. There will be no "I'm finished, i.e. done"; the best you can say is "for this instant, I am done".
When processing a directory, I usually use readdir() in a list context: foreach my $file (grep { ..condx.. } readdir $DIR) { }. I just don't keep traversing the readdir() structure while deleting files. I take a "snapshot" and then process all the files in the snapshot, realizing that the snapshot is not perfect.
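As a sketch of that snapshot idiom (the directory name and the grep condition are placeholders): readdir() in list context drains the whole handle in one shot, and any deletions then happen against the copy, not the live structure:

    use strict;
    use warnings;

    # Sketch of the "snapshot" approach ("/some/dir" and the grep
    # condition are placeholders). readdir() in list context slurps
    # the whole directory at once; the loop then works on that copy,
    # so deleting files does not disturb an in-progress traversal.
    my $dir = '/some/dir';
    opendir my $DIR, $dir or die "opendir $dir: $!";
    my @snapshot = grep { !/^\.\.?$/ } readdir $DIR;   # skip . and ..
    closedir $DIR;    # traversal finished before any deletions

    foreach my $file (@snapshot) {
        my $path = "$dir/$file";
        next unless -e $path;    # may have vanished since the snapshot
        unlink $path or warn "unlink $path: $!";
    }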
There was some mention of rewinddir(). The sequence of closedir() and opendir() will do the same, albeit much more slowly. The idea is to restart the readdir() traversal from the beginning.
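In code, the two restart idioms look like this (assuming $DIR is an open directory handle and $dir its name, as in the sketches above):

    # Two ways to restart the traversal:
    rewinddir $DIR;    # reposition the existing handle to the start

    # ... or the slower equivalent:
    closedir $DIR;
    opendir $DIR, $dir or die "opendir $dir: $!";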
It would be helpful if you could explain the problem symptoms you are seeing with your code. Also, I am curious about these files that "come and go": there is a difference between deleting a file and creating a brand-new one versus re-opening an existing filename for rewrite. If more than one process is mucking with the files in the directory (adding or deleting), how are they coordinating their actions?