in reply to best way to fast write to large number of files

Well, it may be too much of a design change to consider, but this might be a fine application for an SQLite database file (or files) in which $filename is an indexed column. This would, in effect, push the cataloging chore off of the filesystem and onto the indexing capabilities of this (very high-performance) database engine. In this case, I think it's a possibility well worth considering.
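
A minimal sketch of the idea in Perl, using DBI with DBD::SQLite (the database, table, and column names here are only for illustration):

    use strict;
    use warnings;
    use DBI;

    # Illustrative schema: one row per "file", with the filename as a
    # uniquely indexed column.
    my $dbh = DBI->connect( 'dbi:SQLite:dbname=catalog.db', '', '',
        { RaiseError => 1, AutoCommit => 1 } );

    $dbh->do('CREATE TABLE IF NOT EXISTS files (filename TEXT NOT NULL, contents BLOB)');
    $dbh->do('CREATE UNIQUE INDEX IF NOT EXISTS idx_filename ON files(filename)');

    # Writing a "file" becomes an indexed insert rather than a filesystem create.
    my $sth = $dbh->prepare('INSERT OR REPLACE INTO files (filename, contents) VALUES (?, ?)');
    $sth->execute( 'some/path/0001.dat', "example contents\n" );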

Note: one vital caveat is that transactions must be used when writing to SQLite3 databases, since otherwise every disk-write is physically verified.
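
As a sketch of what that looks like in practice (continuing with the $dbh and $sth handles from the example above), many writes can be grouped into one explicit transaction so the commit cost is paid once per batch rather than once per row:

    # DBI's begin_work turns AutoCommit off until the matching commit.
    $dbh->begin_work;
    for my $i ( 1 .. 10_000 ) {
        $sth->execute( "file_$i.dat", "contents of file $i\n" );
    }
    $dbh->commit;    # the whole batch is committed here, in one go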

Re^2: best way to fast write to large number of files
by Anonymous Monk on Jun 24, 2014 at 21:07 UTC
    Note: one vital caveat is that transactions must be used when writing to SQLite3 databases, since otherwise every disk-write is physically verified.

    So you're saying if I don't use transactions, SQLite physically verifies every disk write, and if I use transactions, SQLite doesn't physically verify every disk write?

    When does SQLite ever "physically verify" a write? If you're talking about fsync(2) (which only flushes writes to the disk), transactions are only indirectly involved. In fact, every change to an SQLite database happens in a transaction, whether explicit or implicit. fsync() behavior is actually controlled via PRAGMA synchronous and influenced by PRAGMA journal_mode (see e.g. WAL).
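
    For instance, a sketch of the knobs that actually govern syncing with DBD::SQLite (the database name is made up; check the SQLite docs for the durability trade-offs before copying these settings):

        use strict;
        use warnings;
        use DBI;

        # These pragmas, not the mere presence of transactions, determine
        # how often SQLite calls fsync() and how the journal is kept.
        my $dbh = DBI->connect( 'dbi:SQLite:dbname=catalog.db', '', '',
            { RaiseError => 1 } );
        $dbh->do('PRAGMA journal_mode = WAL');    # write-ahead log instead of a rollback journal
        $dbh->do('PRAGMA synchronous = NORMAL');  # fewer syncs than the default FULL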