in reply to Re^2: Why use references?
in thread Why use references?

what about arrays and hashes prevents you from returning undef on failure?

When assigning the result of a function call to an array or a hash, the function is called in list context. This means that

    @a = foo();
    sub foo { return(undef); }
makes @a contain a single undef value. How do you know that's supposed to be an error, rather than a non-error single undef?
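
To make the ambiguity concrete, here's a small sketch (the sub names are made up): a sub that returns undef to signal failure and a sub that legitimately returns a single undef value look exactly the same to the caller.

    use strict;
    use warnings;

    sub lookup_failed    { return undef }     # undef meant as "error"
    sub lookup_succeeded { return (undef) }   # undef meant as a real (found) value

    my @err = lookup_failed();
    my @ok  = lookup_succeeded();

    # Both arrays contain exactly one undefined element,
    # so the caller can't tell failure from a legitimate undef result.
    printf "err: %d element(s)\n", scalar @err;   # prints 1
    printf "ok:  %d element(s)\n", scalar @ok;    # prints 1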

Furthermore, in

    %h = foo();
    sub foo { return(undef); }
you get an Odd number of elements in hash assignment warning. Even if you fix/ignore that, how do you know the sub didn't mean to return a (non-error) ( "" => undef ) list?
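
A similar sketch for the hash case (sub names made up): the error sub triggers the warning while the non-error sub assigns cleanly, yet both leave the caller with the same hash.

    use strict;
    use warnings;

    sub broken { return undef }             # undef intended as an error signal
    sub legit  { return ( "" => undef ) }   # a genuine one-pair result

    my %h1 = broken();   # warns: Odd number of elements in hash assignment
    my %h2 = legit();    # assigns cleanly

    # Either way the hash ends up with one key, the empty string,
    # whose value is undef -- so the error case is indistinguishable.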

If one chooses to use a return value of undef to signify errors, it makes sense to return references for non-error conditions — even for scalars.
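
For example (a sketch; read_settings and the file name are invented), a sub can return a hash reference on success and undef on failure, so a defined() test on the result is unambiguous:

    use strict;
    use warnings;

    # Return a hash reference on success, undef on failure.
    sub read_settings {
        my ($file) = @_;
        open my $fh, '<', $file or return undef;   # undef unambiguously means "failed"
        my %settings;
        while (my $line = <$fh>) {
            chomp $line;
            my ($key, $value) = split /=/, $line, 2;
            $settings{$key} = $value if defined $key;
        }
        return \%settings;                          # a reference is always true
    }

    my $settings = read_settings('app.conf');
    if (defined $settings) {
        print "loaded ", scalar keys %$settings, " setting(s)\n";
    }
    else {
        warn "could not read settings\n";
    }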

Personally, I try to let any false value indicate failure. This would include undef/''/0 for scalar returns, and empty lists for arrays/hashes. This has the "advantage" of being consistent with many of the built-in functions.
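
In that style (a sketch with made-up data), a list-returning sub signals "nothing/failure" with the empty list, which is false in boolean context just like grep returning no matches:

    use strict;
    use warnings;

    # Hypothetical: return matching items, or the empty list if there are none.
    sub find_admins {
        my (@users) = @_;
        return grep { $_->{admin} } @users;   # empty list when nothing matches
    }

    my @admins = find_admins(
        { name => 'alice', admin => 1 },
        { name => 'bob',   admin => 0 },
    );

    if (@admins) {                            # an empty list is false, like many builtins
        print "found ", scalar(@admins), " admin(s)\n";
    }
    else {
        print "no admins found\n";
    }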

But when I need more power (which is usually the case), I throw exceptions for errors and assume that if a function call returns at all, it succeeded.

    eval {
        my @a = foo();
        # If I'm here, then foo() succeeded. At least, it didn't throw an error.
    };
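
Fleshed out slightly (a sketch; foo and its error text are invented), the error is raised with die and examined via $@ after the eval:

    use strict;
    use warnings;

    sub foo {
        die "foo failed: could not do the thing\n" if rand() < 0.5;   # hypothetical failure
        return (1, 2, 3);
    }

    my @a = eval { foo() };
    if ($@) {
        warn "caught: $@";    # foo() threw an exception
    }
    else {
        print "foo() succeeded and returned ", scalar(@a), " values\n";
    }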

Another thing you sometimes see people do is return data via OUT parameters, and let the function return value only indicate success/failure.
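
A sketch of that pattern (sub and file names invented): the caller passes a reference to be filled in, and the return value carries only success or failure:

    use strict;
    use warnings;

    # Fill the caller's array via a reference (an OUT parameter)
    # and return only a true/false success indicator.
    sub fetch_lines {
        my ($file, $out_ref) = @_;
        open my $fh, '<', $file or return 0;   # failure: OUT parameter left untouched
        @$out_ref = <$fh>;
        chomp @$out_ref;
        return 1;                               # success
    }

    my @lines;
    if (fetch_lines('notes.txt', \@lines)) {
        print "read ", scalar(@lines), " line(s)\n";
    }
    else {
        warn "could not read notes.txt\n";
    }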

We're building the house of the future together.