in reply to is it possible to have too many files

It used to be that Unix file systems almost always implemented directories as just a linear, unsorted array of entries. In such a system, looking up one name means scanning (on average) half of the directory, so even a stat of a single file is O($N) work, and anything that stats every file, like "ls -l", does $N such scans, for O($N**2) total. Having 10,000 files in a (single) directory could easily make a simple "ls" take many minutes to run and make even a stat of a single file take quite a long time.
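To make the old behavior concrete, here is a toy model of that linear scheme (illustrative only; the names and the ten-thousand-entry count are made up, and real directory lookup lives in the kernel, not in Perl):

    use strict;
    use warnings;

    # Toy model of an old-style directory: an unsorted array of names.
    my @dir = map { "file$_" } 1 .. 10_000;

    # One lookup scans entries from the front: O($N).
    sub find {
        my ($name) = @_;
        for my $i ( 0 .. $#dir ) {
            return $i if $dir[$i] eq $name;
        }
        return -1;
    }

    # Stat-ing every file means $N lookups of O($N) each: O($N**2).
    find("file$_") for 1 .. @dir;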

If you deployed a Unix system today, it is very likely that you'd get a file system whose directories carry a tree-based index (the hashed trees of ext3/ext4 or the B+trees of XFS, for example), so that finding the entry for a named file is O(log $N) when there are $N files in the directory, rather than O($N). So having 10,000 files in a directory would cost roughly (some small multiple of) what having 13 files costs in an old-style directory.
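That 13 isn't arbitrary; it is just log2(10_000), rounded, as a quick one-liner confirms:

    $ perl -le 'print log(10_000)/log(2)'
    13.2877123795494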

So, if you have a file system that was created two decades ago (and not recreated since), then you should be very worried about putting 10,000 files into a single directory. If you have a more modern file system, then it is likely that 10,000 files in a single directory is not a huge problem. Even if you are sure that you have a modern file system, you should still test the performance impact of your 10,000-file solution before committing to it.
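Here is a minimal sketch of such a test, assuming a Unix-ish system and nothing beyond the core File::Temp and Time::HiRes modules. Treat the absolute numbers with suspicion (everything will be hot in the OS caches right after creation); how the times grow as you raise the file count is the part that matters:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Temp qw( tempdir );
    use Time::HiRes qw( gettimeofday tv_interval );

    my $n   = shift || 10_000;          # file count to test; try 100, 10_000, 1_000_000
    my $dir = tempdir( CLEANUP => 1 );  # scratch directory, removed at exit

    # Populate the directory with $n empty files.
    for my $i ( 1 .. $n ) {
        open my $fh, '>', "$dir/file$i" or die "open $dir/file$i: $!";
        close $fh;
    }

    # Time stat() on 1000 randomly chosen names.
    my $t0 = [gettimeofday];
    stat( "$dir/file" . ( 1 + int rand $n ) ) for 1 .. 1000;
    printf "stat: %.6f sec per call with %d files\n",
        tv_interval($t0) / 1000, $n;

    # Time one full directory scan (roughly what a plain "ls" reads).
    $t0 = [gettimeofday];
    opendir my $dh, $dir or die "opendir: $!";
    my @names = readdir $dh;
    closedir $dh;
    printf "readdir of %d entries: %.4f sec\n", scalar @names, tv_interval($t0);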

- tye        
