in reply to Re^4: Does module size matter under mod_perl?
in thread Does module size matter under mod_perl?

The file is not read from disk for each request, but it is read from disk on startup, even if "startup" only happens once a month. The OP said he was wondering if it could have "any performance impact" (emphasis mine). Startup performance is a subset of performance.

More significantly, time to read the file from disk was simply the first thing that came to mind as something which could be affected by module size. At no time have I said (or intended to imply) that it was the only thing which could be affected. It was meant as an example of a difference, not an exhaustive list of all possible differences.

Even if my example is wrong (which it may or may not be, depending on how broadly you consider "performance"), my point is unaffected: If micro-optimization actually matters for your application, then you're probably better off porting to C rather than worrying about how to squeeze an extra microsecond out of your Perl code.


Re^6: Does module size matter under mod_perl?
by ikegami (Patriarch) on Jul 08, 2014 at 21:24 UTC

    The OP said he was wondering if it could have "any performance impact"

    Do you really think he's interested in the performance of the startup of the web server? I doubt it.

    But if so, your answer is still completely wrong. We're easily talking seconds, not something so small it can't be measured. For virtually everyone using them, the entire point of using mod_perl or FastCGI is to eliminate that delay you claim is unnoticeable.

    Refactoring into smaller modules isn't going to help. The same code still needs to be loaded and compiled.
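    The one-time compilation cost described here is typically paid in a startup script that Apache runs once, before forking children. A minimal sketch of such a script (the My::App::* module names are invented for illustration; only DBI is a real module):

```perl
# startup.pl -- hypothetical mod_perl preload script, loaded once at
# server start via a directive such as:  PerlRequire /path/to/startup.pl
# Every module listed here is read and compiled exactly once; the
# per-request handlers then find it already present in %INC.
use strict;
use warnings;

use My::App::Handler;   # illustrative application module
use My::App::Model;     # illustrative application module
use DBI;                # real CPAN module, commonly preloaded

1;
```

    Whether that code lives in one big file or ten small ones, the same total amount of code still gets read and compiled during this single startup pass.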

      I get it. You hate my example, and it was probably poorly chosen.

      But you seem set on refusing to accept that the example was not the point of my answer.

      Loading one big file (one meaning of "module" in the Perl context) will not take the exact same amount of time, down to the attosecond, as loading several small files which are functionally equivalent.

      Running code from one big namespace (another meaning of "module" in the Perl context) will not take the exact same amount of time, down to the attosecond, as running code which is spread across multiple namespaces.

      If you look closely enough, refactoring always has an effect on performance, unless it is such a trivial refactor that the processor still runs the exact same opcodes.

      But, in practical terms, that generally doesn't matter. And, if you're performance-sensitive enough that it does matter, then you'll get more benefits from using a high-performance language than you will from moving code around between files and/or namespaces.

      That was the point of my answer: don't waste your time worrying about micro-optimization. That advice remains good even if I supported it with a poor example.

      Finally, if you're going to say I'm "still completely wrong", please at least point out the complete wrongness of things I actually said. I never said that load times are potentially unmeasurable. I said the difference in load time between loading a single file and loading equivalent code from multiple files may not be measurable:

      "the difference is completely unimportant, if there's even a measurable difference at all" (emphasis added)
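      Whether there even is a measurable difference is easy to check empirically. Here is a rough sketch using the core Benchmark and File::Temp modules; the file layout and sub counts are invented purely for illustration:

```perl
#!/usr/bin/perl
# Rough sketch: is loading one big file measurably different from
# loading the same code split across ten smaller files?
# All file names and sub counts here are made up for illustration.
use strict;
use warnings;
use Benchmark qw(timethese);
use File::Temp qw(tempdir);

my $dir = tempdir( CLEANUP => 1 );

# One big file: 1000 trivial subs in one package.
open my $big, '>', "$dir/Big.pm" or die $!;
print {$big} "package Big;\n";
print {$big} "sub sub_$_ { $_ }\n" for 1 .. 1000;
print {$big} "1;\n";
close $big;

# The same subs split across ten files of 100 subs each.
for my $i ( 0 .. 9 ) {
    open my $fh, '>', "$dir/Small$i.pm" or die $!;
    print {$fh} "package Small$i;\n";
    my $lo = $i * 100 + 1;
    print {$fh} "sub sub_$_ { $_ }\n" for $lo .. $lo + 99;
    print {$fh} "1;\n";
    close $fh;
}

# 'do FILE' re-reads and recompiles on every call, unlike 'require',
# which would be satisfied from %INC after the first load.
timethese( 200, {
    one_big   => sub { do "$dir/Big.pm" or die "load failed: $@" },
    ten_small => sub { do "$dir/Small$_.pm" or die "load failed: $@" for 0 .. 9 },
} );
```

      Whatever difference the numbers show on a given machine is exactly the kind of difference being argued about above: real if you look closely enough, and unimportant in practice.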