in reply to readline succeeds but sets $! = EBADF

As a rule of thumb in Unix: You are safe if you check $! (i.e. errno) only after a call actually fails. You can't determine failure from $! alone.

Unless the documentation specifically says how a call handles $!, you should think of $! as reporting the last thing that failed, possibly buried deep down in the code you called.

Update: I spoke too soon regarding the docs. I noticed that the readline docs do say they set $! on an error, but they don't directly say what they return; an example there seems to imply that the return value is undef in that case.
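
If that reading of the docs is right, the check would look roughly like this (a sketch only; the file name and layout are mine, not from the docs):

    # Sketch: assumes readline returns undef on error and sets $! only then.
    open my $fh, '<', 'input.txt' or die "open: $!";   # 'input.txt' is a placeholder

    $! = 0;                               # clear any stale errno first
    my $line = readline $fh;
    if (!defined $line) {
        die "readline failed: $!" if $!;  # undef plus errno set looks like a real error
        # otherwise: plain end of file
    }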


Replies are listed 'Best First'.
Re^2: readline succeeds but sets $! = EBADF
by sgifford (Prior) on Aug 30, 2004 at 19:41 UTC
    Just to strengthen what bluto says, here's the documentation for $! from perlvar(1):
    $! If used numerically, yields the current value of the C "errno" variable, or in other words, if a system or library call fails, it sets this variable. This means that the value of $! is meaningful only immediately after a failure:

        if (open(FH, $filename)) {
            # Here $! is meaningless.
            ...
        } else {
            # ONLY here is $! meaningful.
            ...
            # Already here $! might be meaningless.
        }
        # Since here we might have either success or failure,
        # here $! is meaningless.

    In the above meaningless stands for anything: zero, non-zero, "undef". A successful system or library call does not set the variable to zero.

    If used as a string, yields the corresponding system error string. You can assign a number to $! to set errno if, for instance, you want "$!" to return the string for error n, or you want to set the exit value for the die() operator. (Mnemonic: What just went bang?) Also see "Error Indicators".
Re^2: readline succeeds but sets $! = EBADF
by ambrus (Abbot) on Aug 30, 2004 at 19:45 UTC

    My problem is then the following. Suppose readline returns undef. That can mean two things: either an error or eof. Now how can I distinguish between the two cases? Perldoc perlfunc says it is enough to clear $! before the call and check it afterwards. But if readline can accidentally set $! when it succeeds, isn't it possible that it also accidentally sets $! when it means to say it has reached eof?

    Even if I can count on $! showing the error after readline returns undef, I have to use

        $! = 0;
        $line = readline $file;
        !defined($line) && $! and die "error readline: $!";
    It would be much simpler to write
        $! = 0;
        $line = readline $file;
        $! and die "error readline: $!";
    but if what's happened to me is normal, I can't do that.

    Note that with libc, there is at least one function where you must use errno to see if there's an error; there, checking errno is enough, and you don't have to check the return value at all. From (libc)Parsing of Integers:

    - Function: long int strtol (const char *restrict STRING, char **restrict TAILPTR, int BASE)

    [...]

    You should not check for errors by examining the return value of `strtol', because the string might be a valid representation of `0l', `LONG_MAX', or `LONG_MIN'. Instead, check whether TAILPTR points to what you expect after the number (e.g. `'\0'' if the string should end after the number). You also need to clear ERRNO before the call and check it afterward, in case there was overflow.
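
    To put that in Perl terms: the POSIX module exposes strtol, and as far as I can tell the same clear-errno-before, check-after dance applies there. A rough sketch (the input string is made up):

        use POSIX ();

        # The parsed value alone can't signal failure, since the input might
        # legitimately be 0, LONG_MAX or LONG_MIN.
        my $str = "2147483648";                           # made-up input
        $! = 0;                                           # clear stale errno first
        my ($num, $unparsed) = POSIX::strtol($str, 10);
        if ($str eq '' || $unparsed != 0 || $!) {
            die "bad number '$str'" . ($! ? ": $!\n" : "\n");
        }
        print "parsed $num\n";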

    The GNU libc documentation is not very specific on this point. From (libc)Checking for Errors:

    The initial value of `errno' at program startup is zero. Many library functions are guaranteed to set it to certain nonzero values when they encounter certain kinds of errors. These error conditions are listed for each function. These functions do not change `errno' when they succeed; thus, the value of `errno' after a successful call is not necessarily zero, and you should not use `errno' to determine _whether_ a call failed. The proper way to do that is documented for each function. _If_ the call failed, you can examine `errno'.

    Many library functions can set `errno' to a nonzero value as a result of calling other library functions which might fail. You should assume that any library function might alter `errno' when the function returns an error.
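
    The "not necessarily zero after a successful call" part is easy to see from Perl too; a tiny demonstration ('.' is used only because it certainly exists):

        open my $fh, '<', '/no/such/file/anywhere';   # fails and sets $!
        print "after the failure: $!\n";
        my @st = stat '.';                            # succeeds
        print "after the success: $!\n";              # very likely still the old error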

    Does this mean that a function can change errno even if it succeeds? Of course, one can't state anything more specific about a library as large as libc, and after all, perl is not libc, so perl might behave differently. Perlvar doesn't say anything more specific about $! either.
      Two thoughts.

      First, why not use eof to check for an end-of-file condition? (Both suggestions are sketched below.)

      Second, a common way to handle this situation is to close the filehandle after readline returns undef, and see if the close fails. If it does, there was some kind of error handling the file.
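
      A rough sketch of both suggestions together (the file name is made up):

          open my $fh, '<', 'data.txt' or die "open: $!";

          while (defined(my $line = readline $fh)) {
              print $line;
          }

          # First suggestion: ask the handle whether we really stopped at eof.
          warn "stopped before end of file: $!" unless eof $fh;

          # Second suggestion: check close's return value for pending I/O errors.
          close $fh or die "close failed: $!";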

        No, that's not true. Close returns an error only if an I/O error (including a broken network connection in the case of NFS) occurred after the last write, so that the kernel couldn't report it sooner.

        From close(2):

        Not checking the return value of close is a common but nevertheless serious programming error. It is quite possible that errors on a previous write(2) operation are first reported at the final close. Not checking the return value when closing the file may lead to silent loss of data. This can especially be observed with NFS and disk quotas.

      Does this mean that a function can change errno even if it succeeds?

      Unfortunately, the general answer is yes. Some systems seem to handle errno better on success (Solaris?), but they are probably the exception, and even then I wouldn't trust them 100%. There is a long history behind errno, most of it caused by poor initial "design" (if you can call it that), and it's definitely not going to change soon. If you check errno, you are signing a contract saying you have read the docs on the _specific_ call you are making. If the docs don't mention preservation of errno on success, chances are it isn't preserved.

      You are right -- if you want to _really_ check for errors, in general you must jump through hoops with additional ugly code.
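
      For what it's worth, here is one way that hoop-jumping might look, pulling the thread's suggestions together (a sketch, not a guaranteed-portable recipe; the file name is made up):

          open my $fh, '<', 'input.txt' or die "open: $!";

          while (1) {
              $! = 0;                          # clear stale errno, as perlfunc suggests
              my $line = readline $fh;
              if (!defined $line) {
                  last if eof $fh;             # genuine end of file
                  die "readline failed: $!";   # undef but not eof: treat $! as meaningful
              }
              # ... process $line here ...
          }
          close $fh or die "close failed: $!";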