radiantmatrix has asked for the wisdom of the Perl Monks concerning the following question:

I have been asked by my current client to create a private, common base class for all of our internal classes to inherit from. It contains features such as autogeneration of accessors and mutators for child classes (so that corporate conventions can be enforced).

The subject of this question, though, relates to a particular feature. There is an error method that operates similarly to Class::Base's, except that it pushes errors onto an array when setting. When getting, the behavior depends on context -- in scalar context, it returns the last message on the "stack" via pop; in list context it returns a copy of the entire "stack". In either case, any messages returned are removed from the array.
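To make the behavior concrete, here is a minimal sketch of such an error method. The `_ERR_` key matches the Dumper output shown later in the thread; the constructor and the exact implementation details are my assumptions, not the actual corporate module:

```perl
package LocalTest2;    # package name borrowed from the Dumper output below
use strict;
use warnings;

sub new { bless { _ERR_ => [] }, shift }

# Context-sensitive error(): push when setting; in scalar context pop the
# last message, in list context return (and clear) the whole stack.
sub error {
    my $self = shift;
    if (@_) {                            # setter: push messages onto the stack
        push @{ $self->{_ERR_} }, @_;
        return;
    }
    if (wantarray) {                     # list context: drain the entire stack
        my @all = @{ $self->{_ERR_} };
        $self->{_ERR_} = [];
        return @all;
    }
    return pop @{ $self->{_ERR_} };      # scalar context: remove and return last
}

1;
```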

In general, it's a good policy to deal with all of the errors that may have occurred before destroying the object, so I've been pondering the following DESTROY method:

sub DESTROY {
    my $this = shift;
    my @err  = $this->error;
    while (@err) {
        my $msg = shift @err;
        carp "$this destroyed with error: '$msg'"
            . ( @err ? ' (and ' . scalar(@err) . ' more).' : '.' );
    }
}

Essentially, this checks to see if there are any errors that haven't been processed, and generates warnings for each of them when an object instance is destroyed. No unprocessed errors, no warnings. The goal here is to gently prod developers into processing errors from objects before allowing them to be destroyed. Sample output from a test script (preceded by a Dumper of the object):

$test = bless( {
    '_ERR_' => [
        'First error',
        'Hello there!'
    ]
}, 'LocalTest2' );
LocalTest2=HASH(0x18633a4) destroyed with error: 'First error' (and 1 more). at test.pl line 0
LocalTest2=HASH(0x18633a4) destroyed with error: 'Hello there!'. at test.pl line 0

The question this raises, wise monks, is simply whether or not this is a good idea. Are there better ways to deal with this? Should I be dealing with this at all? Is there anything about my approach that might cause troubles?

<radiant.matrix>
A collection of thoughts and links from the minds of geeks
The Code that can be seen is not the true Code
I haven't found a problem yet that can't be solved by a well-placed trebuchet

Re: Carping about object destruction when errors are unprocessed?
by Old_Gray_Bear (Bishop) on Sep 05, 2006 at 17:56 UTC
    If it were I, I'd make the Final Error Pass optional, and set it with either the 'debug' or 'verbose' option.

    I personally think that there should be no un-caught/un-corrected errors in anything that goes to production. If it was important enough to kvetch about, it is important enough to fix. But I also realize that there are some anomalous conditions that are useful to know about, but do not require immediate programmatic action to address.

    What you don't want to do is throw an error in production that, while it doesn't affect the final outcome of the calculation, does leave the User with a feeling that not everything is right with the world. Thus the optionality of the final error-pass.

    ----
    I Go Back to Sleep, Now.

    OGB

      Hm, making it optional (but default on) is a reasonable thought. Two points I wanted to make clear, to make sure we're really on the same page:

      1. The proposal is just to warn, not to halt -- I'm not sure whether that addresses your concern about "throw{ing} an error in production that, while it doesn't affect the final outcome of the calculation, does leave the User with a feeling that not everything is right with the world."
      2. As I just finished writing in another reply, these errors are of the non-fatal variety. Sometimes, it's important to know about it (to, say, inform program logic), but not something that needs a "fix", per se. Yes, die and friends could be used for this as well, but there are some acceptable reasons why the design team chose this path.

      That said, I want to reiterate that supplying a 'verbose' option to control the "carp on DESTROY" function might not be a bad idea (thanks!), though a consuming module could always just override DESTROY if the warnings were inappropriate...
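      One way the 'verbose' gating might look (the `verbose` constructor flag and the minimal `error` helper are assumptions made so the snippet runs standalone; they are not part of the actual module):

```perl
package LocalTest2;
use strict;
use warnings;
use Carp;

sub new { bless { _ERR_ => [], verbose => $_[1] }, $_[0] }

# Minimal error(): push when setting, drain the stack when getting.
sub error {
    my $self = shift;
    return push @{ $self->{_ERR_} }, @_ if @_;
    my @all = @{ $self->{_ERR_} };
    $self->{_ERR_} = [];
    return @all;
}

# Stay silent at destruction time unless the caller opted in via 'verbose'.
sub DESTROY {
    my $this = shift;
    return unless $this->{verbose};
    my @err = $this->error;
    while (@err) {
        my $msg = shift @err;
        carp "$this destroyed with error: '$msg'"
            . ( @err ? ' (and ' . scalar(@err) . ' more).' : '.' );
    }
}

1;
```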

      <radiant.matrix>
Re: Carping about object destruction when errors are unprocessed?
by duckyd (Hermit) on Sep 05, 2006 at 17:19 UTC
    My first thought when looking at this is that you seem to be mixing "errors" and "warnings". To me, an error is something that generally needs to be handled right away (i.e., anything that dies).

    If you can really avoid handling these errors until object destruction, and even then you are OK "gently prodding" developers into handling them by just printing the "error" as a warning of sorts (via carp), are they really errors at all? If they really must be handled, I would lean towards a more heavy-handed approach. It's hard to ignore a die/croak/confess, but all too easy to ignore a warn/carp/cluck.

      you seem to be mixing "errors" and "warnings"

      Yes, I've mentioned that a couple of times during the design meetings, but I think we came to a reasonable understanding: there are "failures" (die/croak), there are "non-fatal errors" (error), and there are "warnings" (warn/carp).

      Examples:

      • failure: unable to connect to a required database. The software can't proceed, so it croaks an exception.
      • non-fatal error: user input failed to validate. For example:
        if ( $obj->validate( $q->param('new_userid') ) ) {
            create_new_user( $q->param('new_userid') );
        }
        else {
            # tell the user why it was invalid and ask again
            send_reprompt(
                "UserID was not valid for the following reasons:",
                $obj->error
            );
        }
      • warning: couldn't connect to the primary database, had to fall back to the secondary.

      Things like the second case often generate several errors simultaneously, since (for example) a userID can be invalid in several ways -- a situation that the die class of functions doesn't handle elegantly. That's the purpose of the error list in this core module.
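      A hypothetical validate() illustrating that accumulation (the check rules and the minimal error() helper are my inventions, included so the snippet runs standalone):

```perl
package LocalTest2;
use strict;
use warnings;

sub new { bless { _ERR_ => [] }, shift }

# Minimal error(): push when setting, drain the stack when getting.
sub error {
    my $self = shift;
    return push @{ $self->{_ERR_} }, @_ if @_;
    my @all = @{ $self->{_ERR_} };
    $self->{_ERR_} = [];
    return @all;
}

# A single bad UserID can fail several checks at once; each failure lands
# on the error stack instead of aborting at the first problem.
sub validate {
    my ( $self, $userid ) = @_;
    my $ok = 1;
    unless ( defined $userid && length($userid) >= 4 ) {
        $self->error('UserID must be at least 4 characters');
        $ok = 0;
    }
    if ( defined $userid && $userid =~ /\W/ ) {
        $self->error('UserID may contain only letters, digits, and underscores');
        $ok = 0;
    }
    return $ok;
}

1;
```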

      Additionally, it offers an alternative to constructs like:

      eval { $obj->get_stuff_a() };
      if ($@) { $obj->try_a_backup() }
      eval { $obj->get_stuff_b() };
      if ($@) { $obj->try_b_backup() }
      eval { $obj->get_stuff_c() };
      if ($@) { $obj->try_c_backup() }
      Allowing, instead:
      $obj->get_stuff_a() || $obj->try_a_backup || die $obj->error;
      $obj->get_stuff_b() || $obj->try_b_backup || die $obj->error;
      $obj->get_stuff_c() || $obj->try_c_backup || die $obj->error;
      my @err = $obj->error;
      if (@err) {
          warn 'Had to use one or more backup sources because:' . "\n\t"
              . join( "\n\t", @err ) . "\n";
      }

      The latter is more useful for logging (and, IMO, more understandable when read).

      Often, our developers will approach cases like this by simply dying on the first error they encounter. While that's "correct" behavior, it frustrates users and other developers when they have to try an action repeatedly instead of finding out everything that's wrong in one swell foop {sic}.

      Given that further information, would you still advise me to croak instead of carp on DESTROY?

      <radiant.matrix>