in reply to Object Destructors and Signals

The reason I haven't used transactions is that this perl process communicates the status of the object to the frontend website via the status column.

The one checking the status could check if the worker is still running.

Or you could use a time stamp that the worker periodically updates. If the timestamp hasn't been updated within a certain time frame, the worker is assumed to be deadlocked or dead.
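A minimal sketch of that staleness check, assuming an epoch-seconds heartbeat column; the `jobs`/`last_alive` schema and the DBI snippet are illustrative, not from the original post:

```perl
use strict;
use warnings;

# Decide whether a worker heartbeat is stale. $last_alive is the epoch
# time of the worker's last update; $timeout is how many seconds of
# silence we tolerate before assuming the worker is deadlocked or dead.
sub is_stale {
    my ($last_alive, $timeout) = @_;
    return (time() - $last_alive) > $timeout;
}

# The checker might then sweep stale rows like this (the $dbh handle
# and table/column names are assumptions):
#
#   $dbh->do(
#       q{UPDATE jobs SET status = 'Failed'
#         WHERE status = 'Running' AND last_alive < ?},
#       undef, time() - 300,
#   );
```

The shorter the timeout, the sooner failures surface, at the cost of more frequent heartbeat updates from each worker.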

Re^2: Object Destructors and Signals
by bot403 (Beadle) on Nov 05, 2010 at 18:35 UTC

    That's reasonable. However, in my case the frontend and the perl backend are on different hosts. In fact, there are >12 perl workers on different hosts and 1 frontend on a remote webserver. Hence the communication via the DB.

    I do clean up "stuck" entries after a while: after 24 hours I go and set their status to "Failed" or "Aborted". I'm just trying to catch the failure sooner rather than later.

      I'm just trying to catch the failure sooner rather than later.

      The more often the worker updates the "I'm alive" timestamp, the sooner you can catch failures. It sounds like it doesn't update the timestamp at all right now.

        Oh I see. I have a generic "this row was last updated at" timestamp. I don't have an "I'm alive" timestamp.

        Currently, working with the object involves spawning an external process that might not return for 1 minute or 100. I'm not sure when I'd get the chance to update the "I'm alive" timestamp while that operation is happening unless I spawned threads.

        However, it's a neat idea, and it hits on a more general approach of using some sort of watchdog process, either external or internal.
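        One thread-free way to keep the timestamp fresh while the external process runs is to fork a heartbeat child and kill it when the job returns. This is only a sketch of the idea: `update_alive_timestamp` stands in for the real DBI update, and the `sleep 2` system call stands in for the long-running external command.

```perl
use strict;
use warnings;

# Illustrative heartbeat; in practice this would be a DBI UPDATE
# against the worker's "I'm alive" column.
sub update_alive_timestamp { warn "heartbeat at ", scalar localtime, "\n" }

my $heartbeat = fork();
die "fork failed: $!" unless defined $heartbeat;

if ($heartbeat == 0) {
    # Child: refresh the "I'm alive" timestamp periodically (every
    # 60 seconds here) until the parent terminates us.
    while (1) {
        update_alive_timestamp();
        sleep 60;
    }
    exit 0;
}

# Parent: run the external process, which may block for 1 minute
# or 100. 'sleep 2' is a stand-in for the real command.
system('sleep', '2');

# External job finished (or failed): stop and reap the heartbeat child.
kill 'TERM', $heartbeat;
waitpid $heartbeat, 0;
```

        The same shape works with an external watchdog instead: the child's loop moves into a separate monitor process, and the worker only has to touch the DB row once per job.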