in reply to Re: readline succeeds but sets $! = EBADF
in thread readline succeeds but sets $! = EBADF

My problem is then the following. Suppose readline returns undef. That can mean two things: either an error or eof. How can I distinguish between the two cases? Perldoc perlfunc says it is enough to clear $! before the call and check it afterwards. But if readline can accidentally set $! when it succeeds, isn't it possible that it also accidentally sets $! when it means to say it has reached eof?

Even if I can count on $! showing the error after readline returns undef, I have to use

$! = 0; $line = readline $file; !defined($line) && $! and die "error readline: $!";
It would be much simpler to write
$! = 0; $line = readline $file; $! and die "error readline: $!";
but if what's happened to me is normal, I can't do that.
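
For clarity, here is the same pattern spelled out over a few lines. It is only a sketch, and it still assumes that $! is meaningful after readline returns undef, which is exactly what is in question:

    # Sketch: distinguish a read error from eof by clearing and
    # re-checking $!; assumes $! is reliable after a failed readline.
    $! = 0;
    while (defined(my $line = readline $file)) {
        # ... process $line ...
        $! = 0;    # clear again right before the next read
    }
    die "error readline: $!" if $!;   # undef plus $! set => real error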

Note that with libc there is at least one function where you must use errno to see whether there's an error, but there it is enough to check errno; you don't have to check the return value too. From (libc)Parsing of Integers:

- Function: long int strtol (const char *restrict STRING, char **restrict TAILPTR, int BASE)

[...]

You should not check for errors by examining the return value of `strtol', because the string might be a valid representation of `0l', `LONG_MAX', or `LONG_MIN'. Instead, check whether TAILPTR points to what you expect after the number (e.g. `'\0'' if the string should end after the number). You also need to clear ERRNO before the call and check it afterward, in case there was overflow.
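
For comparison, Perl exposes the same convention through POSIX::strtol. Here is a minimal sketch along the lines of the example in perldoc POSIX, with $str a hypothetical input string:

    use POSIX qw(strtol);

    my $str = "1234";                     # hypothetical input
    $! = 0;                               # clear errno before the call
    my ($num, $n_unparsed) = strtol($str, 10);
    if ($str eq "" || $n_unparsed != 0 || $!) {
        die "not a valid number: '$str'" . ($! ? " ($!)" : "") . "\n";
    }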

The documentation of GNU libc is not very specific in this respect. From (libc)Checking for Errors:

The initial value of `errno' at program startup is zero. Many library functions are guaranteed to set it to certain nonzero values when they encounter certain kinds of errors. These error conditions are listed for each function. These functions do not change `errno' when they succeed; thus, the value of `errno' after a successful call is not necessarily zero, and you should not use `errno' to determine _whether_ a call failed. The proper way to do that is documented for each function. _If_ the call failed, you can examine `errno'.

Many library functions can set `errno' to a nonzero value as a result of calling other library functions which might fail. You should assume that any library function might alter `errno' when the function returns an error.

Does this mean that a function can change errno even when it succeeds? Of course, one can't state anything more specific about such a large library as libc, and after all, Perl is not libc, so Perl might behave differently. Perlvar does not say anything more specific about $! either.

Re^3: readline succeeds but sets $! = EBADF
by bluto (Curate) on Aug 30, 2004 at 20:20 UTC
    Does this mean that a function can change errno even if it succeeds?

    Unfortunately, the general answer is yes. Some systems seem to handle errno better on success (Solaris?), but they are probably the exception, and even then I wouldn't trust them 100%. There is a long history behind errno, most of it caused by poor initial "design" (if you can call it that), and it's definitely not going to change soon. If you check errno, you are signing a contract saying you have read the docs on the _specific_ call you are making. If it doesn't mention preserving errno on success, chances are it doesn't.

    You are right -- if you want to _really_ check for errors, in general you must jump through hoops with additional ugly code.

Re^3: readline succeeds but sets $! = EBADF
by sgifford (Prior) on Aug 30, 2004 at 20:58 UTC
    Two thoughts.

    First, why not use eof to check for an end-of-file condition?

    Second, a common way to handle this situation is to close the filehandle after readline returns undef, and see if the close fails. If it does, there was some kind of error handling the file.
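
    A rough sketch combining both suggestions (with $fh a hypothetical filehandle; whether close really catches every kind of read error is disputed below):

        while (defined(my $line = readline $fh)) {
            # ... process $line ...
        }
        # readline returned undef: eof() distinguishes eof from an error
        die "read error: $!" unless eof $fh;
        # errors buffered by the kernel or stdio may only surface at close
        close $fh or die "close failed: $!";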

      No, that's not true. close returns an error only if an I/O error (including a broken network connection in the case of NFS) has occurred after the last write, so the kernel couldn't report it sooner.

      From close(2):

      Not checking the return value of close is a common but nevertheless serious programming error. It is quite possible that errors on a previous write(2) operation are first reported at the final close. Not checking the return value when closing the file may lead to silent loss of data. This can especially be observed with NFS and disk quotas.