in reply to Autovivification with require

This has nothing to do with autovivification. That term refers to perl automatically treating an undefined value used in an lvalue dereference context as a reference to a newly created item of the appropriate type (an arrayref or hashref); see perlref for the definition.

The suggestion is to use a bareword package name instead of a string path. Best practice would be to write a module (contained in the file incl/common.pm), then say require incl::common; (no extension) and let perl locate the file in one of the directories listed in @INC, which you can extend via PERL5LIB (see perlrun and perlvar for more on that).

The cake is a lie.

Replies are listed 'Best First'.
Re^2: Autovivification with require
by Bod (Parson) on Nov 20, 2020 at 15:39 UTC

    From the documentation for require

    In other words, if you try this:

        require Foo::Bar; # a splendid bareword

    The require function will actually look for the Foo/Bar.pm file in the directories specified in the @INC array, and it will autovivify the Foo::Bar:: stash at compile time.

    But if you try this:

        my $class = 'Foo::Bar';
        require $class;     # $class is not a bareword
        # or
        require "Foo::Bar"; # not a bareword because of the ""

    The require function will look for the Foo::Bar file in the @INC array and will complain about not finding Foo::Bar there. In this case you can do:

        eval "require $class";

    or you could do:

        require "Foo/Bar.pm";

    Neither of these forms will autovivify any stashes at compile time and only have run time effects.


    This seems to imply that there is some implicit advantage of one form over another because one autovivifies and the other does not - it was this part of the documentation that gave rise to my question...

      I believe what that usage is trying to highlight is that with a bareword use Foo::Bar; or require Foo::Bar;, perl knows at compile time that there is a Foo::Bar package and makes entries in the package stash (*Foo::Bar and/or %Foo::Bar::) before compiling subsequent code, which may affect things like "variable only used once" warnings. In the use case, the import method is also called at compile time, which can have side effects (e.g. inserting entries into the calling package's namespace).

      If you use the string version, the code in the file being pulled in won't be compiled until the require statement is actually executed, which is after everything else has already been compiled. So it affects when some things happen; whether that's an "advantage" or not depends on what you're expecting to happen.

      ## Poor example: Cwd pulled in with use so %Cwd:: is fully populated
      $ perl -E 'say qq{Cwd stash keys: }, scalar %Cwd::;
          use Cwd;
          say qq{Cwd stash keys after use: }, scalar %Cwd::;
          say qq{Config stash keys: }, scalar %Config::;
          require "Config.pm";
          say qq{Config stash keys after require: }, scalar %Config::;'
      Cwd stash keys: 39
      Cwd stash keys after use: 39
      Config stash keys: 1
      Config stash keys after require: 20

      The cake is a lie.

PM vs PL
by Bod (Parson) on Nov 20, 2020 at 16:00 UTC

    I was under the impression that best practice is to name files to be brought in with require as *.pl and those being brought in with use as *.pm. I have no idea where I got this notion from, it was a long time ago - at least 2 decades - and I have been doing it that way ever since.

    Almost every script I write has a require or three at the top for bringing in code I have created. Most have a few use statements for other people's modules. Over the years I have created a handful of OO modules which get included with a use statement.

    Have I been doing this wrong all these years or has best practice changed and I never noticed?

      I came late to the Perl game, but I've come across required .pl files in very old code. So I'd guess that best practice has changed.

      Update: I even discussed it here.

      map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]

        Yeah...I'm guilty of learning what I needed to learn to make things work, then not learning much else until I needed to make something else work. So having learnt to require *.pl files, I have just kept doing it.

        Maybe I could claim I write fashionably retro code :P

      I would like to mention a 3rd type of "include": a PMC file which is "compiled" Perl code, see e.g. Module::Compile (or: what's this PMC thingy?). (Edit: also this is of interest Perl v5.6.1 looking for .pmc files)

      I don't know whether this project is still alive, but I just confirmed that if a Test.pmc file exists alongside a Test.pm, use Test; will attempt to load Test.pmc first (question: can they be in different @INC dirs?).

      Caveat: Because my system's module compiler (perl -MO=Bytecode,-H -MTest -e1 > Test.pmc) fails with Can't locate object method "ix" via package "0", I just touch Test.pmc, and sure enough I get a segfault upon running my script, as expected from loading rubbish. The important thing is (I believe) that an attempt to load Test.pmc is made. When I delete Test.pmc all runs well and Test.pm is loaded.

      bw, bliako

        The documentation for require does specify that *.pmc takes priority over *.pm files.

        Before require looks for a .pm extension, it will first look for a similar filename with a .pmc extension. If this file is found, it will be loaded in place of any file ending in a .pm extension. This applies to both the explicit require "Foo/Bar.pm"; form and the require Foo::Bar; form.

        The documentation for use doesn't seem to mention *.pmc files.

        Why would one want to pre-compile the code to include?
        My guess is that it gives better performance, as the code doesn't need compiling when the script is executed, and/or better security, as it is significantly more difficult to inspect compiled code and work out what it is doing and how...

Re^2: Autovivification with require
by Anonymous Monk on Nov 20, 2020 at 14:47 UTC
    Maybe another way to say it is that, if someday you needed to rearrange the source file structure for any good reason, but had used literal string paths containing file and directory names, you'd have a lot of tedious text edits to make throughout the code. Whereas you'd have much more flexibility and much less work if you'd referred to them as modules and let Perl find them for you.