in reply to Re^2: Bad file descriptor when trying to close file handle
in thread Bad file descriptor when trying to close file handle

I tested it out, and you are right: the only way I could reproduce this was by closing the same FH twice. (At least I think so; Ubuntu is giving me German error messages.)°

> to which one can continue writing.

Well, kind of: printing to an unlinked file in ">>" mode seems to do nothing, but there's no error message either.

(Though I only did a quick test in the debugger...)

Cheers Rolf
(addicted to the Perl Programming Language :)
Wikisyntax for the Monastery

°) It's unfortunate that we can't add these kinds of errors to perldiag.


Re^4: Bad file descriptor when trying to close file handle
by ikegami (Patriarch) on Mar 03, 2022 at 15:16 UTC

    Well, kind of: printing to an unlinked file in ">>" mode seems to do nothing, but there's no error message either.

    It might seem to you that it does nothing, but that's not the case. It does what I said it does: It writes to the file as it always does.

    Open the file twice, once for reading and once for writing. Then delete the directory entry you used to open the file, then write to it, then read from it. You'll see it's there.

      A bit more on this.

      Files exist independently of directory entries in unix. Think of them as being reference-counted: they exist as long as they are referenced by a directory entry or by an open file handle.
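
      For example (a minimal sketch, not from the thread; the file name demo.txt is made up and a writable current directory is assumed), stat's link count drops to zero after unlink, yet the open handle keeps the file alive:

          use v5.10;
          open my $fh, '>', 'demo.txt' or die $!;
          say 'link count: ', (stat 'demo.txt')[3];   # 1: one directory entry
          unlink 'demo.txt' or die $!;
          say 'link count: ', (stat $fh)[3];          # 0: no directory entries left
          print {$fh} "still writable\n";             # the handle still references the file
          close $fh;                                  # last reference dropped; the data is freed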

      This means you can have "anonymous files". perl -i uses this (which is why it doesn't work on Windows without an extension). File::Temp creates these by default.
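
      For instance (a rough sketch, not from the thread; tempfile() called in scalar context returns just a handle, and File::Temp arranges for the file to be removed automatically since the name is never exposed):

          use v5.10;
          use File::Temp qw(tempfile);

          my $scratch = tempfile();                   # handle only, no name kept around
          print {$scratch} "intermediate results\n";

          seek $scratch, 0, 0 or die $!;              # rewind and read the scratch data back
          print while <$scratch>;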

      ls -l even shows how many times a file is referenced by a directory entry (second column). For plain files, it's usually one. For directories, it should be more than one.

      $ ls -ld . .. .bash* bin usr
      drwxr-xr-x 10 ikegami ikegami 4096 Feb 19 23:53 ./
      drwxr-xr-x  3 root    root    4096 Dec 10 21:14 ../
      -rw-------  1 ikegami ikegami  163 Jan  3 02:07 .bash_logout
      -rw-------  1 ikegami ikegami 2318 Jan  3 02:07 .bashrc
      -rw-------  1 ikegami ikegami    0 Jan  3 02:07 .bashrc_extra
      drwx------  2 ikegami ikegami 4096 Feb 14 12:13 bin/
      drwx------  3 ikegami ikegami 4096 Jan 24 15:39 usr/

      See how .. is referenced by three directory entries? Those are /home, /home/. and /home/ikegami/..

      See how . is referenced by ten directory entries? Those are /home/ikegami, /home/ikegami/., /home/ikegami/bin/.., /home/ikegami/usr/.., etc.
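
      A quick way to check this from Perl (a rough sketch; it assumes a traditional unix filesystem where a directory's link count is 2 plus its number of subdirectories; some filesystems report 1 for directories instead):

          use v5.10;
          my $dir = shift // '.';                      # any directory will do
          my $nlink = (stat $dir)[3];                  # field 3 is the link count
          opendir my $dh, $dir or die $!;
          my $subdirs = grep { !/^\.\.?$/ && -d "$dir/$_" && !-l "$dir/$_" } readdir $dh;
          say "$dir: nlink=$nlink, subdirs=$subdirs (expect nlink == subdirs + 2)";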

      Not sure what you mean; I globbed the directory after writing to the unlinked file, and it was gone.

      Presumably the file and its inode still exist without any association to a directory, but how would I link it back into the directory?

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      Wikisyntax for the Monastery

        The directory entry is gone, yes. Writing to a file doesn't create directory entries.

        I didn't mention glob at all. Did you try the steps I described? The file is only gone once no directory entry and no file handle references it. Also, see the post I just made in parallel to yours.

        $ perl -Mv5.10 -e'
            open my $o_fh, ">", "a" or die $!;
            open my $i_fh, "<", "a" or die $!;
            unlink "a" or die $!;
            print $o_fh "abc\n" or die $!;
            $o_fh->flush();
            chomp( my $line = <$i_fh> );
            say "<$line>";
        '
        <abc>