blockcipher has asked for the wisdom of the Perl Monks concerning the following question:

Greetings fellow monks.

We are currently attempting to write some Perl scripts to do some automated processing, and we need them to be able to communicate with each other. We'd like to use something like System V IPC message queues; however, there is no guarantee that the system we'll be deploying on will let us do that.

My question is: can multiple scripts access the same DB_File/SQLite/similar database? I did some poking around but didn't find anything.

Any info would be great. Thanks in advance!


Replies are listed 'Best First'.
Re: Multiple Clients for db_file, sqlite, similar...
by Fletch (Bishop) on Sep 05, 2006 at 20:48 UTC

    SQLite (see the last sentence under the "Serverless" heading) and BerkeleyDB should both allow concurrent access to the same store; in the latter case, though, I think you do need to do some extra setup to ensure that locking is enabled (I'm sure someone will chime in with a pointer to the docs shortly; there's also a rough sketch below).

    Update: See also "Appropriate Uses For SQLite", which mentions that locking is done at the database-file level (so even a single reader will block out any number of writers, and likewise a single writer blocks out all readers).
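
    For a rough idea of what that BerkeleyDB setup might look like, here is a minimal sketch using the BerkeleyDB module (not DB_File) in Concurrent Data Store (CDS) mode, which gives you multi-process locking without full transactions. The environment directory, database name, and key/value are just placeholders.

        use strict;
        use warnings;
        use BerkeleyDB;

        # Shared environment directory; the CDS flags add the locking.
        my $env = BerkeleyDB::Env->new(
            -Home  => '/tmp/shared-env',
            -Flags => DB_CREATE | DB_INIT_MPOOL | DB_INIT_CDB,
        ) or die "Cannot open environment: $BerkeleyDB::Error";

        # Every client opens the database through the same environment,
        # so BerkeleyDB handles the reader/writer locking for us.
        my $db = BerkeleyDB::Hash->new(
            -Filename => 'messages.db',
            -Flags    => DB_CREATE,
            -Env      => $env,
        ) or die "Cannot open database: $BerkeleyDB::Error";

        $db->db_put('job-42', 'pending');   # one script writes...
        my $status;
        $db->db_get('job-42', $status);     # ...another reads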

Re: Multiple Clients for db_file, sqlite, similar...
by bart (Canon) on Sep 06, 2006 at 06:33 UTC
    Yes, you can use SQLite from multiple clients simultaneously. But, in my experience, you should exercise some caution.

    When using autocommit (the default), I found that I could not write to the database very often. Granted, the hard disk was connected to the PC via a slowish USB cable, but after a few minutes of adding new records the hardware could no longer keep up: I got a Windows dialog box telling me the hard disk wasn't responding, and that was while adding only one record every few seconds. Apparently the number of disk (re)writes is quite large compared to the amount of data actually written.

    When I switched to using transactions, manually committing after every hundred new records, I found that you cannot write to the database from another client while a transaction is in progress. Other clients have to wait until it finishes, and since my transactions took a few minutes (a naive approach), I got timeouts.
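
    In case it helps, the batching pattern looks roughly like this (a sketch only: the database file, table, and the get_next_record() source are made up, and it assumes DBI with DBD::SQLite):

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect('dbi:SQLite:dbname=jobs.db', '', '',
                               { RaiseError => 1, AutoCommit => 1 });

        my $sth = $dbh->prepare('INSERT INTO jobs (name, payload) VALUES (?, ?)');

        my $count = 0;
        $dbh->begin_work;                            # start the first batch
        while (my ($name, $payload) = get_next_record()) {   # hypothetical source
            $sth->execute($name, $payload);
            if (++$count % 100 == 0) {
                $dbh->commit;                        # release the lock briefly
                $dbh->begin_work;                    # start the next batch
            }
        }
        $dbh->commit;                                # flush whatever is left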

    You can raise the timeout (i.e. change how long a client will wait before giving up; the default is a few seconds), but fast it definitely isn't.
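
    For what it's worth, raising that timeout from Perl is a one-liner with a reasonably recent DBD::SQLite (the 30 seconds here is just an example):

        # Wait up to 30 seconds (value is in milliseconds) for a lock
        # before DBD::SQLite gives up with a "database is locked" error.
        $dbh->sqlite_busy_timeout(30_000);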

    So the best approach for me, for this application, was to store the new records in an array of hashes and flush that array to the database in one short burst transaction every few minutes. Of course, in such a case the risk of clashes (duplicate records) when writing from multiple clients gets bigger, because you can't really rely on the database to weed them out at the time you're generating the records.
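
    Roughly, the buffer-and-flush approach I mean looks like this (again a sketch: the table, columns, two-minute interval, and the queue_record()/flush_buffer() names are made up):

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect('dbi:SQLite:dbname=jobs.db', '', '',
                               { RaiseError => 1, AutoCommit => 1 });
        my $sth = $dbh->prepare('INSERT INTO jobs (name, payload) VALUES (?, ?)');

        my @buffer;                    # new records accumulate in memory
        my $last_flush = time;

        # Called instead of inserting straight away.
        sub queue_record {
            my (%record) = @_;
            push @buffer, \%record;
            flush_buffer() if time - $last_flush >= 120;   # every two minutes
        }

        # One short transaction per burst, so the write lock is held only briefly.
        sub flush_buffer {
            return unless @buffer;
            $dbh->begin_work;
            $sth->execute($_->{name}, $_->{payload}) for @buffer;
            $dbh->commit;
            @buffer     = ();
            $last_flush = time;
        }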

    So no, not all is well when writing to SQLite from multiple clients. Granted, the fact that my database lived on a slow USB disk must have made things a lot worse than they would have been with an internal hard disk, but that only changes the scale. I'm sure that SQLite really isn't up to the task of heavy updating from many clients at speed.

      Thanks for the responses. This seems to be a big problem because I can easily lock up both of my processes. One stops dead and the other can't do the deletes it needs to do.

      Anyone have any ideas on how I can prevent the lock contention that's going on?