bot403 has asked for the wisdom of the Perl Monks concerning the following question:

I have an object which wraps some DB interaction, with a destructor that sets a particular database column if it fires while the object is in an "unclean" state. This is especially useful for knowing whether I received a SIGTERM or some such while I was manipulating the object. Specifically, I update a status indicator from "Working" to "Aborted".
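
Roughly, the shape of it is this. A trimmed-down sketch only; the table and column names here are stand-ins, not my real schema:

    package Job;
    use strict;
    use warnings;
    use DBI;

    # Package-scoped handle shared by every Job instance.
    my $dbh = DBI->connect('dbi:Pg:dbname=app', 'user', 'pw',
                           { RaiseError => 1, AutoCommit => 1 });

    sub new {
        my ($class, $id) = @_;
        $dbh->do(q{UPDATE jobs SET status = 'Working' WHERE id = ?},
                 undef, $id);
        return bless { id => $id, clean => 0 }, $class;
    }

    sub finish {
        my $self = shift;
        $dbh->do(q{UPDATE jobs SET status = 'Done' WHERE id = ?},
                 undef, $self->{id});
        $self->{clean} = 1;
    }

    sub DESTROY {
        my $self = shift;
        return if $self->{clean};
        # This is the call that blows up when $dbh has already been
        # reaped during interpreter shutdown.
        $dbh->do(q{UPDATE jobs SET status = 'Aborted' WHERE id = ?},
                 undef, $self->{id});
    }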

I'm having trouble getting the destructor to work properly. Sometimes the database handle gets destroyed before my object's destructor is called and I get a cascade of errors.

Is there any good way to keep the database handle alive until all of my objects' destructors have been called? The handle is currently a package-scoped my variable. I tried putting a reference to it in every object instance, but it still went away at inconvenient times.

I suppose I could maintain a list of objects and walk it, calling destructors (or a cleanup method), when I receive a TERM signal. That seems unclean and probably wrong. It also won't catch cases where my object was destroyed (i.e. went out of scope) due to programming errors.

Is there any way to accomplish this or am I asking too much of the perl interpreter at shutdown time?

Edit: Yeah, the design does fail hard for now. I'm working towards redesigning the application. The reason I haven't used transactions is that this perl process communicates the status of the object to the frontend website via the status column. Therefore I need to commit so the website sees the new data and the 'Working' state. If the perl process aborts I'd like to set the status to 'Aborted' or 'Failed'. Right now it just stays in the 'Working' state.

Re: Object Destructors and Signals
by ikegami (Patriarch) on Nov 04, 2010 at 23:03 UTC

    I'm having trouble getting the destructor to work properly. Sometimes the database handle gets destroyed before my object's destructor is called and I get a cascade of errors.

    Your object is a global variable or it is referenced by one. It is surviving past the end of all lexical scopes, forcing Perl to guess how to destroy what's left in memory. During global destruction, objects can be destroyed in any order.

    The direct solution would be to not use a global variable, or to clean up the global yourself sooner. An END {} block could also help.
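
    For example, inside the package that owns the handle, something like this (a sketch only; cleanup is a hypothetical method that does your "Aborted" update, and new() would be expected to call register):

        use Scalar::Util qw(weaken);

        my @registry;    # weak refs to every live object

        sub register {
            my $obj = shift;
            push @registry, $obj;
            weaken($registry[-1]);    # don't keep objects alive ourselves
        }

        END {
            # END blocks run before global destruction, so the
            # package-scoped $dbh is still in a usable state here.
            $_->cleanup for grep { defined } @registry;
        }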

    That said, the real reason you are having a problem is that your design fails hard: it needs to do something reliably under exceptional circumstances. Adjust your design so that it fails safe, and you won't need to do anything during destruction. Perhaps transactions could be of use.
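
    With the handle connected under AutoCommit => 0, the usual shape is (a sketch; $id and do_the_actual_work are stand-ins for your real code):

        eval {
            $dbh->do(q{UPDATE jobs SET status = ? WHERE id = ?},
                     undef, 'Working', $id);
            do_the_actual_work();
            $dbh->do(q{UPDATE jobs SET status = ? WHERE id = ?},
                     undef, 'Done', $id);
            $dbh->commit;
        };
        if ($@) {
            # If the process dies mid-transaction, the database rolls
            # back on its own; nothing half-done ever becomes visible.
            $dbh->rollback;
            die $@;
        }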

      Your object is a global variable or it is referenced by one. It is surviving past the end of all lexical scopes, forcing Perl to guess how to destroy what's left in memory. During global destruction, objects can be destroyed in any order.

      I'd agree with all of that except for "forcing Perl to guess how to destroy what's left in memory". There is no "forcing" involved. It is more accurate to say "getting to the point where perl no longer cares in which order it destroys things".

      Global destruction skips any orderly teardown purely for the sake of expediency (with a name like "global destruction", what did you expect? ;). The lack of ordering is simply an optimization.

      - tye        

        Oh, but it does care. There are a number of factors it considers to determine the order in which things are freed; in fact, improvements were made there for 5.14. Maybe it just doesn't care enough.
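
        For what it's worth, a destructor can at least notice that it is running during global destruction and skip anything that touches other objects. A sketch (${^GLOBAL_PHASE} is new in 5.14; Devel::GlobalDestruction covers older perls):

            sub DESTROY {
                my $self = shift;
                # Once ordered destruction is over, $dbh may already be gone.
                return if ${^GLOBAL_PHASE} eq 'DESTRUCT';
                $self->cleanup unless $self->{clean};
            }
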
Re: Object Destructors and Signals
by aquarium (Curate) on Nov 04, 2010 at 23:59 UTC
    I think it's only ever reasonable to use the DB-provided transactions to make sure something is either written to the DB or aborted altogether, including a possible rollback (by the DB) of any table modifications made before the end of the transaction was reached. Even in the event of a power failure, DB-provided transactions will either fully commit or roll back upon the next start of the database. I think that's the best level of database integrity you can provide in an application.
    the hardest line to type correctly is: stty erase ^H
Re: Object Destructors and Signals
by sundialsvc4 (Abbot) on Nov 05, 2010 at 01:53 UTC

    I just don’t think that you can ever make this particular architecture sufficiently reliable.

    If the Titanic is going down, then you might have a chance to snap off a distress-call ... but whoever records that urgent message in a database needs to be in the warm, dry, radio-room of the Carpathia.

    Obviously the most reliable approach, if you can manage it, would be to snag that TERM signal, so that your application can respond to it, but do so under its own terms and in a manner of its own choosing.   The signal is treated as an unconditional command given by higher powers: “kill yourself as soon as possible ... have a nice day.”   But the application obeys in such a way that it can reliably and safely record the fact in a database.   It abandons what it is doing and accomplishes a known-good ROLLBACK.   It logs the abnormal-termination event, cleanly commits that transaction, and cleanly closes the database handle before it finally, graciously, “gives up the ghost.”   ;-)
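
    In outline, that might look like this (a sketch; next_job, process, and $current_id are stand-ins, and the handle is assumed to be connected with AutoCommit => 0):

        my $terminated = 0;
        $SIG{TERM} = sub { $terminated = 1 };    # just take note; no dying here

        while (my $job = next_job()) {
            last if $terminated;
            process($job);
        }

        if ($terminated) {
            $dbh->rollback;                      # abandon the half-done work
            $dbh->do(q{UPDATE jobs SET status = 'Aborted' WHERE id = ?},
                     undef, $current_id);
            $dbh->commit;                        # record the abort reliably
        }
        $dbh->disconnect;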

    If you are working in an environment such as Windows, or in certain Unix/Linux environments, you might have at your disposal an “event logging” facility which can serve as an adjunct or an alternative to attempting to write to a production database under such circumstances.   Just get the distress-call off to the event-logger, who is always “warm and dry.”   At system-shutdown time, the event logger is expressly set aside to “be there until (almost) the bitter end,” for this very purpose.
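
    On Unix/Linux that can be as simple as this (Sys::Syslog ships with perl; the ident string and $job_id are made up for illustration):

        use Sys::Syslog qw(:standard :macros);

        openlog('myworker', 'pid', LOG_DAEMON);
        syslog(LOG_ERR, 'job %d aborted by SIGTERM', $job_id);
        closelog();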

Re: Object Destructors and Signals
by jeffa (Bishop) on Nov 04, 2010 at 22:39 UTC

    "Sometimes the database handle gets destroyed before my object's destructor is called and I get a cascade of errors."

    Why is that happening? I would look into why other "things" are allowed to destroy the database handle and restrict them from doing so. Perhaps you would benefit from giving the object its own database handle as an attribute?

    Finally, perhaps DBIx::Connector would be of use to you ... check it out. Hope this helps. :)
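
    Its use is along these lines (a sketch; the connection details and $job_id are placeholders):

        use DBIx::Connector;

        my $conn = DBIx::Connector->new($dsn, $user, $pass,
                                        { RaiseError => 1, AutoCommit => 1 });

        # run() hands the callback a live handle in $_, reconnecting
        # and retrying once if the connection has gone away ('fixup').
        $conn->run(fixup => sub {
            $_->do(q{UPDATE jobs SET status = ? WHERE id = ?},
                   undef, 'Working', $job_id);
        });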

    jeffa

    L-LL-L--L-LL-L--L-LL-L--
    -R--R-RR-R--R-RR-R--R-RR
    B--B--B--B--B--B--B--B--
    H---H---H---H---H---H---
    (the triplet paradiddle with high-hat)
    
Re: Object Destructors and Signals
by ikegami (Patriarch) on Nov 05, 2010 at 16:27 UTC

    The reason I havent used transactions is that this perl process communicates the status of the object to the frontend website via the status column.

    Whatever is checking the status could also check whether the worker is still running.

    Or you could use a timestamp that the worker periodically updates. If the timestamp hasn't been updated within a certain time frame, the worker is assumed to be deadlocked or dead.
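
    Sketched out (Postgres-flavoured SQL; the column name and interval are illustrative):

        # Worker: touch the heartbeat on every pass through its main loop.
        $dbh->do(q{UPDATE jobs SET heartbeat = NOW() WHERE id = ?},
                 undef, $job_id);

        # Frontend: anything "Working" but stale is presumed dead.
        my $stale = $dbh->selectall_arrayref(q{
            SELECT id FROM jobs
            WHERE  status = 'Working'
              AND  heartbeat < NOW() - INTERVAL '5 minutes'
        });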

      That's reasonable. However, in my case the frontend and the perl backend are on different hosts. In fact, there are >12 perl workers on different hosts and 1 frontend on a remote webserver. Hence the communication via the DB.

      I do clean up "stuck" entries after a while: after 24 hours I go and set their status to "Failed" or "Aborted". I'm just trying to catch the failure sooner rather than later.

        I'm just trying to catch the failure sooner rather than later.

        The more often the worker updates the "I'm alive" timestamp, the sooner you can catch failures. It sounds like it doesn't update the timestamp at all right now.