in reply to OT : Which one is faster ?

SQLite is probably (much?) faster than pgSQL for small databases but, as far as I can tell, every committed transaction rewrites the entire db to disk, so insert/update performance should degrade rapidly with larger db's.

That said, SQLite for small projects and pgSQL for big ones is exactly my point of view.

Rule One: "Do not act incautiously when confronting a little bald wrinkly smiling man."

Re^2: Which one is faster ?
by jethro (Monsignor) on Jul 13, 2009 at 09:55 UTC

    I don't think that is true. SQLite seems to divide its database file into pages. See these comments here:

    No. SQLite uses the B-Tree algorithm. Inserting a new value in the middle of a table involves rewriting 4 or 5 pages in the worst case. The usual case is to rewrite just the one page where the value is being inserted.

    and here in the SQLite optimization FAQ:

    An SQLite database is split into a btree of "pages" which are 1K in size by default
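
    (As an aside, you can ask an existing database file for its page size straight from DBI; this is just a minimal sketch, assuming DBD::SQLite is installed and "test.db" stands in for whatever file you have:)

        use strict;
        use warnings;
        use DBI;

        # "test.db" is only a placeholder filename for this sketch
        my $dbh = DBI->connect( "dbi:SQLite:dbname=test.db", "", "",
                                { RaiseError => 1 } );

        # page_size reports the page size of the database file; 1K was
        # the historical default, newer SQLite versions default higher
        my ($page_size) = $dbh->selectrow_array("PRAGMA page_size");
        print "page size: $page_size bytes\n";

        $dbh->disconnect;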

    One thing that seems to speed up inserts is using a transaction to group a batch of writes; see the sketch below.
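
    A rough sketch of what that looks like with DBI (the table and column names here are made up for illustration):

        use strict;
        use warnings;
        use DBI;

        # "test.db" and the "items" table are placeholders for this example
        my $dbh = DBI->connect( "dbi:SQLite:dbname=test.db", "", "",
                                { RaiseError => 1, AutoCommit => 1 } );

        $dbh->do("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)");

        my $sth = $dbh->prepare("INSERT INTO items (name) VALUES (?)");

        # with AutoCommit on, every execute() is its own transaction and has
        # to be synced to disk; grouping the inserts is dramatically faster
        $dbh->begin_work;
        $sth->execute("item $_") for 1 .. 10_000;
        $dbh->commit;

        $dbh->disconnect;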

    While SQLite is probably better suited to smaller projects, the definition of "small" is somewhat debatable. I assume that mysql and postgres have a lot more optimizations in place for complicated queries with joins etc. Also, separating the database from the client means you can at least move it onto a dedicated database server, or split the load even further through replication (see the DBI sketch after the quoted comment below for how little the client side changes). But performance for simple SQL queries on one machine seems to be comparable to the bigger engines (if you believe the SQLite makers). Also see this comment:

    I'd have to differ on opinion here. I have an sqlite database that's at 6.9GB with about 40 million records, and it's working just fine.
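
    (To illustrate the point about separating the database from the client: with DBI the client code stays the same and only the DSN changes. A small sketch; the host name, database name and credentials below are invented:)

        use strict;
        use warnings;
        use DBI;

        # same client code, different backends; only the DSN differs
        my $dsn_sqlite = "dbi:SQLite:dbname=app.db";                # local file
        my $dsn_pg     = "dbi:Pg:dbname=app;host=db.example.com";   # separate server

        my $dbh = DBI->connect( $dsn_sqlite, "", "", { RaiseError => 1 } );
        # moving to a dedicated postgres box later is mostly a DSN change:
        # my $dbh = DBI->connect( $dsn_pg, "appuser", "secret", { RaiseError => 1 } );

        my ($ok) = $dbh->selectrow_array("SELECT 1");
        print "connected, SELECT 1 returned $ok\n";

        $dbh->disconnect;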

Re^2: Which one is faster ?
by salva (Canon) on Jul 13, 2009 at 09:12 UTC
    every committed transaction rewrite the entire db to disk

    no, it doesn't!