in reply to Do multiple use statements across module dependency hierarchies require as little expense as multiple require statements?

Does package C get loaded 3 times?

No.

If C exports into all three namespaces, does it slow things down?

Yes, exports take time; have a gander at Exporter.pm's source to see what I mean.

Going back to some of my code, I removed some use directives and replaced them with require directives inside subs and methods. This sped things up a lot for some procedures.

This is most likely because use is executed at compile time, meaning all use statements are executed (and their modules loaded) no matter where they appear (yes, even if they are inside subs). A require, on the other hand, executes at runtime, so a require statement within a sub will not be executed until that sub is run.

So I suspect that the speed up you are seeing is that now your modules are being loaded on-demand instead of all at compile time. This essentially spreads out the weight of the module loading (and potentially in some cases, does not do it at all).

Now, if this is a webapp (as you seemed to indicate) and you are planning to deploy this under vanilla CGI, then this kind of optimization is a win. However if you deploy this under mod_perl or FastCGI (or some other "persistent interpreter" solution) then you are better off sticking with use.

-stvn

Re^2: Do multiple use statements across module dependency hierarchies require as little expense as multiple require statements?
by leocharre (Priest) on Jan 10, 2008 at 20:14 UTC

    Ah! Very wonderful of you to point out the mod_perl gotcha! Indeed, wouldn't require be a catastrophe under that environment? Would the calls be unavailable, or would the whole thing have to be recompiled, etc.?

    Yes, just as you say. The modules are loaded on call, not by default, and that is why it runs fast: what is not "used" is never sought.

    This is not just useful in CGI, by any stretch of the imagination.
    It is also *incredibly* useful in command-line apps that may be able to do various different things.
    Or maybe your script has a 50% chance of failure for some reason, and should exit as soon as possible.
    For example, jobs on cron that may have to do an expensive task at any moment, and that check every 60 seconds whether the procedure should happen.
    The require is not reached unless the conditions are met; thus, quick and cheap.

    Really helps speed things up with an interpreted language.

      Ah! Very wonderful of you to point out the mod_perl gotcha! Indeed, wouldn't require be a catastrophe under that environment? Would the calls be unavailable, or would the whole thing have to be recompiled, etc.?

      In a persistent and forking environment, the more likely-constant items you load into memory before forking, the more you can share between processes with copy-on-write memory. If most processes will use a module and you don't load it before you fork, each one will suffer the stat calls and loading time and importing time and use non-shared memory to load the code. If you do load before the fork, you only pay for the loading and compilation time once, and all subsequent processes will transparently use the same memory pages.
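      Under mod_perl, that pre-fork loading is typically done in a startup file pulled in with a PerlRequire directive in httpd.conf. A hypothetical sketch (the module list and path are illustrative, not from the original post):

```perl
# startup.pl -- pulled in by "PerlRequire /path/to/startup.pl", so
# these modules compile once in the parent Apache process; the pages
# holding their compiled code are then shared copy-on-write by every
# forked child.
use DBI ();               # empty lists skip import() in the
use CGI ();               # wrong (startup) package
use My::App::Common ();   # hypothetical application module
1;
```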

        "catastrophe" is certainly over-stated. Since the extra disk interactions to load the module are fairly small and are only done once over the hopefully thousands of requests that will be handled by one Apache child process, the impact of having to load a module in every child is likely not very significant.

        As for sharing memory, that can work to the extent that the things loaded into memory by the module land on pages that are never modified in the child processes. Given the nature of Perl, I suspect that only a minority of the extra memory allocated will stay shared for a significant length of time, perhaps only a tiny fraction of it.

        So the down-side to delayed loading under mod_perl is often going to be quite minor, I expect.

        So I'd lean toward delayed loading and use mod_perl start-up options to force pre-loading of widely-used larger pieces.

        - tye