If, as it appears, you have control over the process that writes the files in the first place, there is a much easier way around this problem that requires no locking or other fiddly stuff.
What you do is create the file(s) and name them in an agreed-upon way. Sequential numbers will do: 0001, 0002, 0003 and so on.
You can take all the time in the world to write this file. The trick is that your second-phase opendir/readdir loop doesn't look for these files directly. Instead, after you have written the 0001 file and closed it, you create an empty stub file, like this:
# drop a zero-length marker next to the finished data file
my $file = '0001';
open OUT, "> $file.ok" or die "cannot open $file.ok for output: $!\n";
close OUT;
That is, you create a zero-length file in the directory. What you do then is look for files with an extension of .ok, and when you encounter one, you know that all you have to do is strip off the extension to recover the name of the data file. You then process the data file, and when you're done, you delete both it and the .ok file.
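For the second phase, a minimal sketch of the consumer loop might look like the following. The directory name and the process_datafile() routine are placeholders for illustration, not part of the original post:

use strict;
use warnings;

my $dir = './incoming';    # placeholder directory

sub process_datafile {
    my ($path) = @_;
    # ... whatever the second stage actually does with the data ...
}

opendir my $dh, $dir or die "cannot open $dir: $!\n";
for my $marker (sort grep { /\.ok$/ } readdir $dh) {
    (my $datafile = $marker) =~ s/\.ok$//;   # strip the .ok to get the data file name

    process_datafile("$dir/$datafile");

    # only remove the files once processing has completed
    unlink "$dir/$datafile" or warn "cannot unlink $datafile: $!\n";
    unlink "$dir/$marker"   or warn "cannot unlink $marker: $!\n";
}
closedir $dh;

Because the .ok file is only ever created after the data file has been closed, anything the loop finds by its marker is guaranteed to be complete.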
This is about as simple and foolproof as it gets. You just have to ask yourself two questions: how do you recover if the first stage crashes in the middle of writing the data file, before it gets around to writing the .ok file? And if the second stage crashes before deleting both files, is it a problem to process the file a second time?
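For what it's worth, one possible answer to the first question (not spelled out in the original): before the writer is started, sweep the directory for data files that have no matching .ok marker; those can only be leftovers from a crash mid-write, so they can safely be deleted or re-queued. A minimal sketch, assuming the data files are all-digit names and the writer is not running while the sweep does:

use strict;
use warnings;

my $dir = './incoming';    # placeholder directory

opendir my $dh, $dir or die "cannot open $dir: $!\n";
for my $name (grep { /^\d+$/ } readdir $dh) {
    next if -e "$dir/$name.ok";    # marker present, so the data file is complete
    warn "removing incomplete data file $name\n";
    unlink "$dir/$name" or warn "cannot unlink $name: $!\n";
}
closedir $dh;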
I have used this technique in the past and have subjected it to pretty brutal loads without encountering any race conditions. If there are any, I'd like to be corrected.
The monk with no name speaks wisely. I don't know why I never managed to think of that particular optimisation of the problem. I suppose it's because it worked well the first time I came up with the technique, and I never really bothered to look at it again with a critical eye.
I ♥ Perl Monks.
print@_{sort keys %_},$/if%_=split//,'= & *a?b:e\f/h^h!j+n,o@o;r$s-t%t#u' |