rovf has asked for the wisdom of the Perl Monks concerning the following question:

The do FILENAME construct does not work if the last line in the file does not end in a newline. Bug or feature? For example, the program

use strict;
use warnings;

my $do_me = "do_me.pl";
foreach my $ending ('', "\n") {
    print "Create file with", ($ending ? '' : 'out'), " newline\n";
    open(OUT, '>', $do_me) or die "$!";
    print OUT "q(string)$ending";
    close OUT;
    my $result = do $do_me;
    print "Can not read $do_me: $!\n" if $!;
    print "Can not evaluate $do_me: $@\n" if $@;
    print("$result\n") if defined $result;
}
produces as output
Create file without newline
Can not read do_me.pl: Bad file descriptor
string
Create file with newline
string

-- 
Ronald Fischer <ynnor@mm.st>

Replies are listed 'Best First'.
Re: Why do we need a \n with do FILENAME?
by kennethk (Abbot) on Jul 06, 2010 at 15:34 UTC
    As described in $! and Error Indicators in perlvar, "A successful system or library call does not set the variable [$!] to zero." This means you cannot test $! and expect to get a meaningful result. Note that despite your reported error, your code executes the script with no problem. Consider:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $do_me = "do_me.pl";
    print "(1)Can not read $do_me: $!\n" if $!;
    foreach my $ending ("\n", '') {
        print "Create file with", ($ending ? '' : 'out'), " newline\n";
        open(OUT, '>', $do_me) or die "$!";
        print "(2)Can not read $do_me: $!\n" if $!;
        print OUT "q(string)$ending";
        close OUT;
        my $result = do $do_me;
        print "(3)Can not read $do_me: $!\n" if $!;
        print "Can not evaluate $do_me: $@\n" if $@;
        print("$result\n") if defined $result;
    }
    print "(4)Can not read $do_me: $!\n" if $!;

    outputs:

    (1)Can not read do_me.pl: Bad file descriptor
    Create file with newline
    (2)Can not read do_me.pl: Bad file descriptor
    string
    Create file without newline
    (3)Can not read do_me.pl: Bad file descriptor
    string
    (4)Can not read do_me.pl: Bad file descriptor

    As to why you get that pattern, my guess is internal try-catch structures within the open and do blocks.
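    The point about $! can be seen in isolation. The following minimal sketch (the unreadable path is made up for illustration) shows that $! keeps the error from the last failing call even after later calls succeed, which is why testing it unconditionally gives stale results:

```perl
use strict;
use warnings;

# A call that fails sets $! (here, "No such file or directory"):
open(my $fh, '<', '/no/such/file/hopefully');

# A call that succeeds does NOT reset $! back to zero:
open(my $ok, '<', $0) or die "$!";
close $ok;

# So this still prints the stale error from the first open:
print "errno is still set: $!\n" if $!;
```

    The only safe pattern is to consult $! immediately after an operation has actually signalled failure through its return value.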

      A successful system or library call does not set the variable $! to zero.
      I am aware of this, but in my case, this is not the explanation for the problem, though I see now that I should have made it clear from the beginning. Consider the modified code:

      use strict;
      use warnings;

      my $do_me = "do_me.pl";
      foreach my $ending ('', "\n") {
          print "Create file with", ($ending ? '' : 'out'), " newline\n";
          open(OUT, '>', $do_me) or die "$!";
          print OUT "q(string)$ending";
          close OUT;
          $! = 0;
          my $result = do $do_me;
          print "Can not read $do_me: $!\n" if $!;
          print "Can not evaluate $do_me: $@\n" if $@;
          print("$result\n") if defined $result;
      }
      The difference from the original code is that I now reset $! explicitly before do-ing the file. However, I still get the error. Since $! isn't set inside the file-to-be-evaluated, it must have been set by Perl while processing the file.

      If we are picky and take the documentation, which says

      If "do" cannot read the file, it returns undef and sets $! to the error. If "do" can read the file but cannot compile it, it returns undef and sets an error message in $@. If the file is successfully compiled, "do" returns the value of the last expression evaluated.
      , at face value, we can of course argue as follows: since our do does not return undef, we can safely ignore $!. Hence, this is not a bug.

      However, this argument sounds a bit nitpicky to my ears, for the following reason: imagine that we are do-ing a file which (legally) returns undef, which might be reasonable if the code in the file is executed only for its side effects rather than for its return value. In that case, there is no way to decide whether the file was read and executed, or whether Perl simply was not able to process it. Here is a simplified version of my test case demonstrating the problem:

      use strict;
      use warnings;

      my $do_me = "do_me.pl";
      foreach my $ending ('', "\n") {
          print "Create file with", ($ending ? '' : 'out'), " newline\n";
          open(OUT, '>', $do_me) or die "$!";
          print OUT "undef$ending";
          close OUT;
          $! = 0;
          my $result = do $do_me;
          print "Can not read $do_me: $!\n" if $!;
          print "Can not evaluate $do_me: $@\n" if $@;
          print("$result\n") if defined $result;
      }
      Actually, this is a simplified example derived from a real application.

      -- 
      Ronald Fischer <ynnor@mm.st>

        Since $! isn't set inside the file-to-be-evaluated, it must have been set by Perl while processing the file.

        Yes, and completely irrelevant. $! cannot be used to determine if an error occurred.

        Imagine that we are doing a file which (legally) returns undef, which might be reasonable if the code in the file is executed only for its side effects

        No, it's not reasonable. The file being do'd should return a true value on success, just like require'd files do.
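        One way to follow that convention, assuming you control the do'd file, is to end it with an explicit true value, the same rule require enforces for modules. A sketch of such a file (the side effect here is made up for illustration):

```perl
# do_me.pl -- code run purely for its side effects
use strict;
use warnings;

our %config = (verbose => 1);   # hypothetical side effect

1;   # explicit true value, so the caller can treat undef/false as failure
```

        The caller can then treat any false return value from do as a genuine failure instead of guessing.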

        I do appreciate your point about undef as a legal return value; however, the code sample in the documentation for do shows that the expected usage is a defined test on the return value, likely for this very reason. As I said, I suspect there is a try-catch in the file-access subroutines, so you are getting a false negative from an internal failure. You could add an extra layer of safety with a -r test, guarantee a trailing newline in your file, or guarantee a defined return value, but all of these are workarounds. I would be particularly hesitant to simply append the newline, as that strikes me as fragile.
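        For reference, the error-checking idiom suggested in perldoc -f do checks $@ first and consults $! only when the return value is undefined. It is only a sketch of the documented pattern, and it still cannot distinguish a file that legally returns undef from one that could not be read, which is exactly the ambiguity under discussion:

```perl
use strict;
use warnings;

my $file = "do_me.pl";   # assumed to exist, as in the examples above
my $return;
unless ($return = do $file) {
    warn "couldn't parse $file: $@" if $@;
    warn "couldn't do $file: $!"    unless defined $return;
    warn "couldn't run $file"       unless $return;
}
```

        Combined with the convention that the do'd file ends in a true value, this makes all three failure modes detectable.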