vinoth.ree has asked for the wisdom of the Perl Monks concerning the following question:

SQLite will be useful for small projects with less data, and PostgreSQL will be useful for big projects with more data.

Which one is faster at inserting rows, SQLite or PostgreSQL?

Update:

Title modified to mark the question as off topic (OT).

Replies are listed 'Best First'.
Re: Which one is faster ?
by davorg (Chancellor) on Jul 13, 2009 at 07:54 UTC

    Did you have a Perl question?

    Given that SQLite and PostgreSQL are both available as free downloads, I'm not sure why you wouldn't just install them both and try them out.

    I strongly suspect that you'll find SQLite faster than PostgreSQL. But a lot depends on your individual project.
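
    A minimal DBI sketch of what "try them out" could look like -- the DSNs, credentials, and the table t(id, payload) are placeholders, the table is assumed to already exist in both databases, and with AutoCommit left on every insert is its own transaction:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use DBI;
        use Time::HiRes qw(time);

        # Placeholder DSNs and credentials -- adjust for your own setup.
        my %dsn = (
            SQLite     => [ 'dbi:SQLite:dbname=test.db', '',     ''         ],
            PostgreSQL => [ 'dbi:Pg:dbname=test',        'user', 'password' ],
        );

        for my $backend (sort keys %dsn) {
            my ($dsn, $user, $pass) = @{ $dsn{$backend} };
            my $dbh = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });

            # Assumes a table t (id INTEGER, payload TEXT) exists in both databases.
            my $sth = $dbh->prepare('INSERT INTO t (id, payload) VALUES (?, ?)');

            my $start = time;
            $sth->execute($_, "row $_") for 1 .. 1000;
            printf "%-10s %.3fs for 1000 inserts\n", $backend, time - $start;

            $dbh->disconnect;
        }

    The same loop runs against either backend; only the DSN changes, which is the whole point of testing both.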

    --

    See the Copyright notice on my home node.

    Perl training courses

Re: Which one is faster ?
by marto (Cardinal) on Jul 13, 2009 at 08:04 UTC

    What do you have, a small project with 'less' data or a big project with 'more' data?

    Sounds like you are going to have to do some research, perhaps benchmarking both databases, though I'm sure there is more to consider than speed of inserts.

    Also, please mark non-Perl questions as OT, for off topic, in the title.

    Update: Fixed typo reasearh

    Martin

Re: OT : Which one is faster ?
by JavaFan (Canon) on Jul 13, 2009 at 11:54 UTC
    Which one is faster at inserting rows, SQLite or PostgreSQL?
    Test it. But realize that one "inserting rows" workload can be very different from another "inserting rows" workload. Things that may matter:
    • The size of the row.
    • The number of indices. If there are indices, where in the tree(s) the rows are inserted.
    • Are you using transactions? How many rows in a transaction?
    • How big is the database? How fast are your disks? Where's your log partition?
    • Is the table/are the indices "hot" or "cold" (loaded in memory, or still on disk)?
    • Are there any triggers involved? Constraints? Duplicate keys? Foreign key constraints?
    just to name a few; a rough timing sketch follows.
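
    A rough sketch of that kind of test with DBI and DBD::SQLite -- the file name, table, payload size, and row count are all arbitrary placeholders, and the same loop could just as well be pointed at a dbi:Pg: DSN:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use DBI;
        use Time::HiRes qw(time);

        my $dbh = DBI->connect('dbi:SQLite:dbname=bench.db', '', '',
                               { RaiseError => 1, AutoCommit => 0 });

        # Same insert loop, with and without an index to maintain.
        for my $with_index (0, 1) {
            $dbh->do('DROP TABLE IF EXISTS t');
            $dbh->do('CREATE TABLE t (id INTEGER, payload TEXT)');
            $dbh->do('CREATE INDEX t_id ON t (id)') if $with_index;
            $dbh->commit;

            my $sth   = $dbh->prepare('INSERT INTO t (id, payload) VALUES (?, ?)');
            my $start = time;
            $sth->execute($_, 'x' x 200) for 1 .. 10_000;   # 200-byte payload per row
            $dbh->commit;                                   # one transaction for the whole batch
            printf "index=%d: %.3fs for 10_000 rows\n", $with_index, time - $start;
        }
        $dbh->disconnect;

    Varying the payload size, the number of indices, and the number of rows per commit in a loop like this is usually enough to see which of the factors above dominates for your particular workload.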
Re: OT : Which one is faster ?
by afoken (Chancellor) on Jul 13, 2009 at 11:09 UTC

    Benchmark it under real conditions. There are a lot of conditions that affect the speed of the databases. Just a silly example:

    If I wanted to "prove" that Pg is slow as hell, I would run two tests: one on an in-memory table in SQLite on a fast machine with lots of RAM, and one on a permanent table in PgSQL over an ISDN connection to ancient hardware with as little RAM as possible and slow disks.

    If I wanted to "prove" that SQLite is slow as hell, I would reverse the conditions: A permanent table in SQLite, stored on an uncached floppy disk drive vs. a temporary table in PgSQL on the fattest server I could get, connected with at least gigabit ethernet (or via local loopback), and with PgSQL tuned as much as possible.
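
    For what it's worth, DBD::SQLite makes the "rigged" fast case easy to set up: pass :memory: as the database name and nothing ever touches the disk. A small sketch (the file name is made up):

        use strict;
        use warnings;
        use DBI;

        # Everything lives in RAM and disappears when the handle goes away.
        my $ram_dbh  = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                                    { RaiseError => 1 });

        # The same code against a file-backed database pays for disk I/O
        # on every commit.
        my $disk_dbh = DBI->connect('dbi:SQLite:dbname=slow.db', '', '',
                                    { RaiseError => 1 });

    Which of the two you benchmark against the other engine decides the "winner" before a single row is inserted.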

    Alexander

    --
    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
Re: Which one is faster ?
by psini (Deacon) on Jul 13, 2009 at 08:02 UTC

    SQLite is probably (much?) faster than pgSQL at accessing small databases but, as far as I can tell, every committed transaction rewrites the entire db to disk, so insert/update performance should decrease rapidly with larger db's.

    That said, SQLite for small projects and pgSQL for big ones is exactly my point of view.

    Rule One: "Do not act incautiously when confronting a little bald wrinkly smiling man."

      I don't think that is true. SQLite seems to divide its database file into pages. See these comments here:

      No. SQLite uses the B-Tree algorithm. Inserting a new value in the middle of a table involves rewriting 4 or 5 pages in the worst case. The usual case is to rewrite just the one page where the value is being inserted.

      and here in the SQLite optimization FAQ:

      An SQLite database is split into a btree of "pages" which are 1K in size by default

      One thing that seems to speed up inserts is using transactions to group a batch of writes.
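
      A sketch of that idea with DBI and DBD::SQLite -- the file name, table, and row count are placeholders, and the table is assumed to exist:

          use strict;
          use warnings;
          use DBI;

          my $dbh = DBI->connect('dbi:SQLite:dbname=test.db', '', '',
                                 { RaiseError => 1, AutoCommit => 1 });
          my $sth = $dbh->prepare('INSERT INTO t (id, payload) VALUES (?, ?)');

          $dbh->begin_work;                        # suspend AutoCommit for the batch
          $sth->execute($_, "row $_") for 1 .. 10_000;
          $dbh->commit;                            # one flush to disk instead of 10_000
          $dbh->disconnect;

      With AutoCommit left on, each execute is its own transaction and has to be made durable on its own, which is where most of the time goes.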

      While SQLite is probably better suited to smaller projects, the definition of "small" is somewhat debatable. I assume that MySQL and Postgres have many more optimizations in place for complicated queries with joins etc. Also, separating the database from the client means you can at least move the work to a dedicated database server, or split the load even further through replication. But for simple SQL queries on one machine, performance seems to be comparable to the bigger engines (if you believe the SQLite makers). Also see this comment:

      I'd have to differ on opinion here. I have an sqlite database that's at 6.9GB with about 40 million records, and it's working just fine.

      every committed transaction rewrites the entire db to disk

      no, it doesn't!

Re: OT : Which one is faster ?
by Anonymous Monk on Jul 13, 2009 at 10:10 UTC
      Did you even read the document you linked to? It says
      The numbers here are old enough to be nearly meaningless. Until it is updated, use this document only as proof that SQLite is not a sluggard.

      in big red letters. So it's not really a good reference, is it?