in reply to Database queue logic

Perhaps a MUTEX might be of use, so that only one process is performing a SELECT at a time?

Once you have the object you can DELETE at leisure, so wrapping a MUTEX around the SELECT would ensure the safety you're looking for (look around for documentation on inter-process communications, or the chapter in the Camel book).
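
A rough sketch of what I mean, using flock on a lock file as the inter-process mutex. The table name (queue), its columns (id, payload), the lock-file path and the process() routine are only placeholders, not anything from the OP's code:

    use strict;
    use warnings;
    use Fcntl qw(:flock);
    use DBI;

    # Stand-in for whatever work is actually done on each record.
    sub process { my ($payload) = @_; print "processing $payload\n" }

    my $dbh = DBI->connect('dbi:Pg:dbname=jobs', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 1 });

    open my $lock, '>', '/tmp/queue.lock' or die "lock file: $!";

    # Only one process may run the SELECT at any one time.
    flock($lock, LOCK_EX) or die "flock: $!";
    my $rows = $dbh->selectall_arrayref(
        'SELECT id, payload FROM queue LIMIT 10', { Slice => {} });
    flock($lock, LOCK_UN);

    # Processing and the DELETE happen at leisure, outside the mutex
    # (which, as Update 2 below concedes, is exactly the flaw).
    for my $row (@$rows) {
        process($row->{payload});
        $dbh->do('DELETE FROM queue WHERE id = ?', undef, $row->{id});
    }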

Update: I'm being downvoted on this one... it seems my experience in multitasking embedded and workstation environments is causing consternation for the readers of this thread...

Update 2: As Foxcub correctly points out below, it is futile to protect the SELECT with a MUTEX without also wrapping the DELETE in the same critical section. Thus I am completely wrong; my logic is entirely flawed. Apologies.

Update 3: Added readmore tags.

Replies are listed 'Best First'.
Re^2: Database queue logic
by Marcello (Hermit) on Jun 14, 2005 at 12:14 UTC
    This was an idea that crossed my mind, but it introduces overhead because processes have to wait for the MUTEX to be released and cannot process records simultaneously.

    Marcello

      Update 2: As pointed out in Foxcub's reply to this post, I was wrong to ignore the fact that the DELETE also needs to occur within the atomic action that contains the SELECT. Apologies.

      Update 3: added readmore tags.

        "Reading a record from a database (SELECT) is likely to take a fraction of the time that writing a record to the database will take (DELETE). Hence, the odds of both your processes attempting to SELECT at the same time are low."

        Surely the time taken for a select or delete is largely irrelevant here?

        The OP indicates that the workflow for the current application is to select a number of rows, process them, and finally delete the rows from the database.

        Using a mutex around the select in an attempt to multi-thread or multi-process this operation would seem futile unless the database is also updated within the mutex to indicate that a set of rows is already being processed.

        That update allows following processes to pick up the next batch of rows for processing. Without performing that update, it'd be necessary to lock around the entire transaction (from the initial select until after the delete has occurred), meaning that the solution would be no faster than a simple single-process operation.

        If that was what you intended to indicate, then your comment about the time taken for a select and a delete is irrelevant and misleading. If not, I don't like your thinking: it's more than a little flawed.

        Without that locking or update it'd be very likely for every row to be processed more than once, which may or may not be an issue depending on the situation. More important, though, is the fact that repeated processing of the same data is both useless and potentially time consuming, largely negating the benefits of using a multi-process application architecture in the first place.
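
        As a rough illustration only of the "claim the rows inside the lock" idea above: the schema (a queue table with id, payload and a claimed_by column), the flock-based mutex and the process() routine are assumptions for the sake of the sketch, not the OP's code.

            use strict;
            use warnings;
            use Fcntl qw(:flock);
            use DBI;

            # Stand-in for whatever work is actually done on each record.
            sub process { my ($payload) = @_; print "processing $payload\n" }

            my $dbh = DBI->connect('dbi:Pg:dbname=jobs', 'user', 'pass',
                                   { RaiseError => 1, AutoCommit => 1 });

            open my $lock, '>', '/tmp/queue.lock' or die "lock file: $!";

            # The SELECT and the claiming UPDATE both happen while holding
            # the mutex, so no other worker can grab the same rows.
            flock($lock, LOCK_EX) or die "flock: $!";
            my $ids = $dbh->selectcol_arrayref(
                'SELECT id FROM queue WHERE claimed_by IS NULL LIMIT 10');
            if (@$ids) {
                my $in = join ',', ('?') x @$ids;
                $dbh->do("UPDATE queue SET claimed_by = ? WHERE id IN ($in)",
                         undef, $$, @$ids);
            }
            flock($lock, LOCK_UN);   # other workers can now claim the next batch

            # These rows are now ours: processing and the DELETE can happen
            # outside the lock with no risk of duplicate work.
            for my $id (@$ids) {
                my ($payload) = $dbh->selectrow_array(
                    'SELECT payload FROM queue WHERE id = ?', undef, $id);
                process($payload);
                $dbh->do('DELETE FROM queue WHERE id = ?', undef, $id);
            }

        The lock window stays short because only the claim happens inside it; the slow part (the processing and the delete) runs concurrently across workers.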