in reply to (OT) should i limit number of files in a directory
You'll definitely need to go multilevel for that many entries, even if no (human) user ever enters the directory or runs an 'ls' on it. Opening a file still requires finding it, and doing four searches through a few thousand entries each is going to be a hell of a lot faster than doing one search through a couple million, unless the names are indexed specifically for lookups in a way that filesystems generally don't do.
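If it helps, here's a minimal sketch of the multilevel idea in Perl. The MD5 hash and the two-hex-characters-per-level layout are just one common choice of fan-out, not the only way to slice it:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);
use File::Path qw(make_path);
use File::Spec;

sub path_for {
    my ($root, $name) = @_;
    my $h = md5_hex($name);
    # Two hex characters per level gives a 256-way fan-out; two levels
    # spread 3 million files out to roughly 45 per leaf directory.
    return File::Spec->catfile(
        $root, substr($h, 0, 2), substr($h, 2, 2), $name);
}

my $path = path_for('/var/spool/myapp', 'some-file.dat');
my (undef, $dirs, undef) = File::Spec->splitpath($path);
make_path($dirs);    # create intermediate levels as needed
print "$path\n";
```

Hashing the name (rather than, say, taking its first characters) keeps the leaves evenly filled even when the filenames share long common prefixes.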
Also, since it hasn't been mentioned yet (and I realize you may have thought of this already): inodes. Unless you've specifically tuned your fs to have a higher-than-default inode density, it may not be able to support 3 million files at all, regardless of how large or small those files are or how they're organized. 'df -i' will tell you how many inodes the filesystem has and how many are in use. (Why, yes, I have had a print server grind to a halt, claiming the fs was full when 2/3 of the space was unused. How did you guess? CUPS had been forgetting to clean up after itself and consumed all available inodes with 0-byte files.)
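A quick, hedged way to check that from Perl before a bulk write; this assumes the usual GNU/Linux 'df -i' column layout (Filesystem, Inodes, IUsed, IFree, IUse%, Mounted on), and note that long device names can wrap the data line on some systems:

```perl
use strict;
use warnings;

my $mount = @ARGV ? shift @ARGV : '.';
my ($header, $data) = `df -i $mount`;
die "df -i failed\n" unless defined $data;

# Fields on GNU/Linux: Filesystem Inodes IUsed IFree IUse% Mounted-on
my (undef, $inodes, $iused, $ifree) = split ' ', $data;
print "total: $inodes  used: $iused  free: $ifree\n";
warn "not enough free inodes for 3 million files\n"
    if $ifree < 3_000_000;
```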
Re^2: (OT) should i limit number of files in a directory
by leocharre (Priest) on Sep 11, 2008 at 17:01 UTC