in reply to (OT) should i limit number of files in a directory

You seem fairly convinced that a database is not the right answer here. I'll take your word for it.

You'll definitely need to go multilevel for that many entries, even if no (human) user ever enters or does an 'ls' on the directory. Opening a file in the directory still requires finding the file, and doing four searches through a few thousand entries each is going to be a hell of a lot faster than doing one search through a couple million, unless the names are indexed specifically for searching in a way that filesystems generally don't do.
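To make the multilevel idea concrete, here's one common way to do it: hash the filename and use a couple of hex digits per directory level, so each directory stays in the low thousands of entries. This is just an illustrative sketch (the function name, fan-out, and paths are my own, not anything from the thread):

```python
import hashlib
import os

def shard_path(root, name, levels=2, width=2):
    """Map a flat filename into a nested directory tree using a hash
    prefix, so no single directory ever holds millions of entries.
    With width=2 each level has at most 256 subdirectories."""
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(root, *parts, name)

# e.g. "report.txt" lands in something like /data/ab/cd/report.txt
print(shard_path("/data", "report.txt"))
```

The lookup cost is exactly the "four searches through a few thousand each" tradeoff: a couple of small directory scans instead of one enormous one.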

Also, since it hasn't been mentioned yet, and I realize you may have thought of this already, but... inodes. Unless you've specifically tuned your fs to have a higher-than-default inode density, it may not be able to support 3 million files regardless of how large or small those files may be or how they're organized. 'df -i' will tell you how many inodes the filesystem has. (Why, yes, I have had a print server grind to a halt, claiming the fs was full when 2/3 of the space was unused. How did you guess? CUPS had been forgetting to clean up after itself and consumed all available inodes with 0-byte files.)
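If you'd rather check inode headroom from a script than eyeball 'df -i', the same numbers are available via statvfs. A minimal sketch (the mount point is an assumption; point it at the filesystem you actually plan to fill):

```python
import os

# statvfs exposes the same totals that 'df -i' prints:
# f_files = total inodes, f_ffree = inodes still free.
st = os.statvfs("/")  # assumed mount point; substitute your own
used = st.f_files - st.f_ffree
print(f"{used} of {st.f_files} inodes in use")
```

If f_ffree is within a few million of zero, 3 million new files won't fit no matter how small they are.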


Re^2: (OT) should i limit number of files in a directory
by leocharre (Priest) on Sep 11, 2008 at 17:01 UTC

    Holy cow... I'm seeing things like .. 8 million inodes, 4 million.. that's not a lot..

    I'm starting to reconsider my db/fs stance. Using mysql *would* make a bunch of other stuff easier, like querying across the network. It seemed like a low-class thing to do, storing all those text files in a db... hmm. I think I could keep them under 1 meg each.

    Eek.. If you'll excuse me.. I think I'm gonna go ask about the print server...