in reply to Re: Design flat files database
in thread Design flat files database

You could search a few thousand bytes of directory data, even using linear search, in far less time than it would take you to access that data.

Sorry, whilst I'm no expert on *nix filesystems, I think you are wrong. At least as far as ext2 goes, and ext3 used in its default configuration.

The problem is that in the former, and by default in the latter, the filename-to-inode mappings stored in the directory files form a singly linked list that must be searched from the top each time.

Directories

Each directory is a list of directory entries. Each directory entry associates one file name with one inode number, and consists of the inode number, the length of the file name, and the actual text of the file name. To find a file, the directory is searched front-to-back for the associated filename. For reasonable directory sizes, this is fine. But for huge directories this is inefficient, and ext3 offers a second way of storing directories that is more efficient than just a list of filenames.

So not only does that mean that, in a 1-million-file directory, you need to inspect 500,000 entries on average for each lookup, it also means that the VFS is unlikely to be able to retain the entire directory file in cache, which means frequent re-reads.
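
To make the cost concrete, here's a toy Perl model of that front-to-back scan. (A toy only: the real on-disk entries also carry a record length and name length, omitted here, and the counts are scaled down for illustration.)

    use strict;
    use warnings;

    # Toy model of an ext2-style directory: an ordered list of
    # [ inode, name ] pairs that can only be scanned front-to-back.
    my @dir = map { [ $_, "msg$_" ] } 1 .. 100_000;

    sub lookup {
        my ($name) = @_;
        for my $entry (@dir) {                  # linear scan: O(n)
            return $entry->[0] if $entry->[1] eq $name;
        }
        return;                                 # not found
    }

    printf "inode %d\n", lookup('msg50000');    # ~50,000 comparisons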

Ext3 has a mechanism (HTree) for improving this: Creating 100,000 files in a single directory took 38 minutes without directory indexing... and 11 seconds with the directory indexing turned on. The trouble is, very few people use it, even though it can be switched on with tune2fs -O dir_index.

A few years ago (from memory, on BSD), moving ~100 million files from a single directory to a three-level hierarchy improved the time taken to locate and read small (a few KB) files from whole seconds to tens of milliseconds. Try it yourself to see the difference it makes.
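
To reproduce that experiment on a smaller scale, a rough Perl sketch along these lines will do. (The file count, fan-out, and directory names are invented for illustration, and OS caching will flatter both layouts on repeated runs.)

    use strict;
    use warnings;
    use File::Path qw(make_path);
    use Time::HiRes qw(time);

    my $N = 100_000;

    # Flat layout: all files in one directory.
    make_path('flat');
    for my $i (0 .. $N - 1) {
        open my $fh, '>', "flat/$i" or die $!;
        close $fh;
    }

    # Nested layout: two levels with a fan-out of 100 at each.
    for my $i (0 .. $N - 1) {
        my ($d1, $d2) = (int($i / 1_000) % 100, int($i / 10) % 100);
        make_path("nested/$d1/$d2");
        open my $fh, '>', "nested/$d1/$d2/$i" or die $!;
        close $fh;
    }

    # Time the same random lookups against both layouts.
    my @probe = map { int rand $N } 1 .. 10_000;

    my $t0 = time;
    my $found = grep { -e "flat/$_" } @probe;
    printf "flat:   %.3fs (%d found)\n", time() - $t0, $found;

    $t0 = time;
    $found = grep {
        my ($d1, $d2) = (int($_ / 1_000) % 100, int($_ / 10) % 100);
        -e "nested/$d1/$d2/$_";
    } @probe;
    printf "nested: %.3fs (%d found)\n", time() - $t0, $found;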

By reducing the size of the directories to 100 or 256 entries, the entire directory at any level can fit into a single block. The root level effectively gets locked in cache, making the first level of the lookup happen in microseconds. And for an application like the OP's, where most accesses will be for the latest message IDs, the one or two second-level directories that are most accessed will also tend to remain cache resident. So in use, most accesses will not need to hit the disk at all until it comes to reading or writing the target file itself.
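
For a numeric message ID, deriving such a path is a couple of lines of Perl. A minimal sketch, assuming a fan-out of 256 per level and an invented "msgs" root (IDs above 2**24-1 would need another level):

    use strict;
    use warnings;

    # Split a message ID into three hex pairs, so no directory
    # holds more than 256 entries. Names here are illustrative.
    sub id_to_path {
        my ($id) = @_;
        my ($d1, $d2, $file) = unpack 'A2 A2 A2', sprintf '%06x', $id;
        return "msgs/$d1/$d2/$file";
    }

    print id_to_path(1_193_046), "\n";   # msgs/12/34/56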

The benefits are far less when there is no locality of reference -- i.e. the files are accessed randomly -- but for the OP's application, they should be tangible and very worthwhile.

I also agree that going too deep negates the benefits.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^3: Design flat files database
by jpl (Monk) on Jul 15, 2011 at 16:43 UTC
    You could search a few thousand bytes of directory data, even using linear search, in far less time than it would take you to access that data.
    Sorry, whilst I'm no expert on *nix filesystems, I think you are wrong.
    Well, I did say a few thousand bytes of directory data, which isn't going to apply to 100,000 files, 1 million files or 100 million files. I was mostly trying to move the OP away from the directory-per-digit option. If the IDs are used to cross-reference messages (as they are for messages here in the monastery), then a database, rather than flat files, is even more compelling. I don't know how messages are stored in the monastery, but I strongly suspect it is via a database, not in flat files. You wouldn't want to run monastic searches against unindexed flat files, but such searches would be relatively easy to implement (efficiently) in most databases.

    I have recently been trying to nudge the OP in the direction of databases, and that's a nudge I see reflected in many of the responses.

      I have recently been trying to nudge the OP in the direction of databases, and that's a nudge I see reflected in many of the responses.

      Indeed. I asked a similar question.

      Why are you settled upon a "flat file database" rather than one of the other options? (RDBMS, HADOOP, NoSQL etc.)

      That said, RDBMSs are pretty shite at handling hierarchical datasets, whereas file-systems are explicitly designed and tuned for exactly that. It would be an interesting exercise to compare the response times of the two using identical, threaded datasets. But then again, neither scales well.

      Facebook apparently use hundreds of sharded MySQL instances ensconced behind thousands of memcache instances, with more (PHP!?!) caching in front of that. They seem to make it work, but it sounds like a disaster waiting to happen to me. But we can probably assume that the OP isn't likely to be requiring that scale of things anytime soon.

      One nice thing about using the file-system is that it is relatively easy to scale out across multiple boxes by partitioning the ID space to pretty much whatever level is required. RAIDed disks in each box take care of your hardware redundancy, and each box trickles off updates in the background to remote off-line storage. Far easier to partition and manage than distributed RDBMSs, and no coherency problems.
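
      Routing by ID is just as simple. A sketch, with the host names invented and hash partitioning shown for brevity (a range partition would instead keep the hot, recent IDs together on one box):

          use strict;
          use warnings;

          # Map a message ID to one of several storage boxes.
          my @boxes = qw( store01 store02 store03 store04 );   # hypothetical hosts

          sub box_for {
              my ($id) = @_;
              return $boxes[ $id % @boxes ];
          }

          print box_for(1_193_046), "\n";   # store03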


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.