Re^2: Sequences, last_insert_id, SQLite, and Oracle
by gaal (Parson) on Jun 14, 2005 at 10:56 UTC
This isn't safe for any kind of multi-user use. Say user A is in the middle of a transaction and asks for a sequence. The global id is incremented, but since A has not completed the transaction, he does not commit. Now B comes along and asks for a sequence. He sees the previous value of the global id, which is then incremented and given to him -- the same value A had got!
This can only be safe if the sequence pull is atomic, for example if it is done on separate database handles from the ones A and B use for the rest of their applications.
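The interleaving described above can be sketched in a few lines. This is a hypothetical illustration using Python's sqlite3 module rather than the thread's Perl/DBI, with the two "sessions" simulated by ordering the reads before either write, but the race is the same:

```python
import sqlite3

# A roll-your-own "sequence" table holding the last id handed out.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE seq (id INTEGER)")
db.execute("INSERT INTO seq VALUES (0)")
db.commit()

# Simulate sessions A and B, both reading before either one writes:
a = db.execute("SELECT id FROM seq").fetchone()[0]  # A reads 0
b = db.execute("SELECT id FROM seq").fetchone()[0]  # B reads 0; A hasn't committed
db.execute("UPDATE seq SET id = ?", (a + 1,))       # A claims 1
db.execute("UPDATE seq SET id = ?", (b + 1,))       # B also claims 1
print(a + 1, b + 1)  # both sessions got the same "unique" id
```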
This is true, although you're assuming that the inserts occur within a transaction - and this isn't necessarily the case.
It was the best fix I could come up with for a system which had to include support for Foxpro 2 (don't ask!) - which had no auto-ids and no transactional support.
I must confess I assumed you weren't working in a multi-user environment, since looking at the SQLite docs I see it doesn't support multi-user updates/inserts on a single database file.
update: Plus, thinking about it, in the situation you describe the A transaction should lock the sequence table until it is complete, and the B transaction would have to wait until A completed and removed the lock.
"update: Plus, thinking about it, in the situation you describe the A transaction should lock the sequence table until it is complete, and the B transaction would have to wait until A completed and removed the lock."
Sequences are an Oracle construct that are NOT tables. You cannot lock a sequence. This is, in fact, the whole difference between a sequence and a table that you just select from, increment, and then update*. If sequences did behave transactionally, it would lead to lots of deadlock problems and MASSIVE inter-session contention (which is why you'd get so many deadlocks).
* in truth, it would be possible to implement an Oracle sequence using a table, but only via another Oracle-specific construct called an "autonomous transaction". An autonomous transaction commits or rolls back independently of the transaction which invoked it. The point is, you *must* commit the incrementing of the sequence, *even if* you roll back the insert. Otherwise only one database session at a time would be able to have an open transaction inserting into a given table.
------------
:Wq
Not an editor command: Wq
Perversely, if the database has no transactions then your technique is indeed safe. Likewise if you don't use the sequence inside a transaction -- but if you do have transactions and want to use them, odds are you'd like to be able to use the sequence with them :-)
I'm using SQLite for development and will possibly move to Oracle for production. SQLite is fine for applications that are not heavily multiuser: locking certainly exists, it's just not very finely-grained. (There are of course other limitations, like how putting the database on NFS is a bad idea, but for my purposes they don't present a problem.)
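The coarse granularity is easy to observe: SQLite's write lock covers the whole database file, not individual rows or tables. A small sketch with Python's sqlite3 (the timeout is lowered so the second writer fails fast instead of waiting for the lock):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path)
a.execute("CREATE TABLE t (x INTEGER)")
a.commit()

b = sqlite3.connect(path, timeout=0.1)  # give up almost immediately

a.execute("BEGIN IMMEDIATE")            # A takes the database-wide write lock
try:
    b.execute("BEGIN IMMEDIATE")        # B cannot write *anywhere* in the file
    locked = False
except sqlite3.OperationalError:        # "database is locked"
    locked = True
a.commit()                              # A releases the lock
b.execute("BEGIN IMMEDIATE")            # now B can proceed
b.commit()
print(locked)  # True
```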
Re^2: Sequences, last_insert_id, SQLite, and Oracle
by jplindstrom (Monsignor) on Jun 14, 2005 at 21:42 UTC
Like others have mentioned, having a select followed by an insert/update based on that value is a race condition waiting to happen.
One fairly cross-platform solution is to do "SELECT foo FROM bar FOR UPDATE" which places an exclusive/write lock on the row. This works for Oracle, MySQL, PostgreSQL, Sybase, and MS SQL Server, according to my experience + a quick Google.
Tragically it doesn't seem to work in SQLite :)
/J
Also, "select ... for update" does nothing to deal with race-conditions on *insert* (which is the case here), only race-conditions around updates and deletes.
The only ways to prevent a race-condition on insert are either to obtain an exclusive lock on the table or by the creative use of a unique constraint.
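The unique-constraint approach amounts to optimistic retry: each contender computes a candidate id, attempts the insert, and the constraint arbitrates; the loser recomputes and tries again. A hypothetical illustration in Python's sqlite3 (the race is simulated by having "B" reuse a stale candidate):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, who TEXT)")  # PK is unique

def insert_with_retry(who):
    while True:
        # Candidate id: one past the current maximum (racy on its own).
        cand = db.execute("SELECT COALESCE(MAX(id), 0) + 1 FROM t").fetchone()[0]
        try:
            db.execute("INSERT INTO t VALUES (?, ?)", (cand, who))
            return cand
        except sqlite3.IntegrityError:
            continue  # someone else won the race for this id; recompute and retry

# Both "sessions" compute the same candidate before either inserts:
cand = db.execute("SELECT COALESCE(MAX(id), 0) + 1 FROM t").fetchone()[0]
db.execute("INSERT INTO t VALUES (?, ?)", (cand, "A"))   # A wins id 1
try:
    db.execute("INSERT INTO t VALUES (?, ?)", (cand, "B"))  # B tries the same id...
except sqlite3.IntegrityError:
    got = insert_with_retry("B")  # ...the constraint arbitrates; B retries and gets 2
```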
Anyway, as I mentioned elsewhere in this thread, there *is* a way to deal with this issue in Oracle, and it is the "returning ... into" clause (coupled with bidirectional binding). That is, to my knowledge, the best (and nearly the only) way of handling this.
If you *absolutely* needed a different way of doing it (not that I can think of why you would want another way... but hey, TIMTOWTDI, I guess), you could make use of Oracle's dbms_output channel: have your on-insert trigger do a dbms_output.put(:new.id), and read from the dbms_output channel in your Perl code. I speak from experience, though, and this method sucks. (For some reason I searched and searched and didn't find the "returning id into" construct until after I had implemented it via dbms_output, and that was a mess. I was glad to go back and replace it with the right way.)
------------
:Wq
Not an editor command: Wq
"Also, "select ... for update" does nothing to deal with race-conditions on *insert* (which is the case here)"
Of course it does: the "select for update" followed by the update was for getting a unique sequence number for the PK in the insert.
The next insert won't happen until the first transaction is committed, since the second "select for update" will block until then (at least those are the locking semantics in Oracle).
This is also precisely why this is bad for concurrency.
Not only are the inserts serialized, but the small sequence-number table probably fits in a data block or two and is bound to incur a huge amount of block contention for _all_ tables that get their PK like this.
That's why it's a good idea to keep the database abstraction on a higher level, so that Oracle specific sequences can be used on Oracle, etc.
DBI is very useful to abstract away connectivity issues, but it would be nice to have a standard way to manage SQL dialect issues as well, like date formats, limit, PK generation and stuff like that.
/J