sub foo {
    eval {
        my $m = shift;   # eval BLOCKs share the enclosing sub's @_
        die $m;          # die appends " at FILE line LINE.\n" for us
    };
    return;
}

sub bar {
    my $m = shift;
    # Simulate what die() does, by setting $@ by hand.
    $@ = $m . ' at ' . __FILE__ . ' line ' . __LINE__ . "\n";
    return;
}
foo("foo message");
print "foo $@";
bar("bar message");
print "bar $@";
It seems to me that whenever you use die() to throw exceptions, expecting (or requiring) your users to use eval() to catch them, you are essentially just setting $@ for them.
I don't see how it could hurt anything to set it explicitly as long as you document that your code uses it to return an error. That's not to say I'm a fan of this approach (I'm not), but I am suspicious of your claim that it would "make it much harder to program for evals."
And if you set $@ in the middle of your code, you lose the eval error!
But if you want to rely on $@, you had better inspect it immediately after your eval() anyway. That is, unless you trust everyone else's code to always localize $@ and to be free from compile-time errors in their eval BLOCKs.
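To make that concrete, here is a small sketch (the helper() sub is hypothetical) showing how any intervening call that runs its own eval will clobber $@ unless you save the error right away:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A routine that clobbers $@ as a side effect of its own eval,
# without bothering to localize it.
sub helper {
    eval { die "internal error\n" };
    return 42;
}

eval { die "real problem\n" };

# WRONG: calling other code before checking $@ can lose the error.
helper();
print "after helper: \$@ = $@";    # now "internal error", not "real problem"

# RIGHT: capture $@ immediately after the eval.
eval { die "real problem\n" };
my $err = $@;
helper();
print "saved error: $err";         # still "real problem"
```

The same reasoning is why well-behaved library code wraps its internal evals in "local $@;" so callers' error state survives the call.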
-sauoq
"My two cents aren't worth a dime.";