in reply to Re: large perl module
in thread large perl module

... and thus will reload your big module again and again

I think the effects of this largely depend on how things are configured. Let's say MaxRequestsPerChild is set to 10000 and an average request takes 500 ms(*) to deliver; then the total service time of one process will be >= 5000 secs. Assuming the reload time of the module is on the order of 5 secs, that is still a ratio of only 1 : 1000 (in other words, acceptable).
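The amortization argument above can be sketched as a quick back-of-the-envelope check (all numbers are the assumptions from this post, not measurements):

```python
# Back-of-the-envelope check of the amortization estimate above.
# All figures are assumptions from the post, not benchmarks.

max_requests_per_child = 10_000   # Apache MaxRequestsPerChild
request_time_s = 0.5              # assumed average time per request (500 ms)
module_reload_s = 5.0             # assumed module load time per child restart

# Total time one child spends serving requests before it is recycled.
service_time_s = max_requests_per_child * request_time_s

# Fraction of a child's lifetime spent reloading the module.
overhead_ratio = module_reload_s / service_time_s

print(service_time_s)    # 5000.0
print(overhead_ratio)    # 0.001, i.e. a 1 : 1000 ratio
```

With these numbers the reload cost is amortized over 10000 requests, which is why it comes out negligible.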

___

(*)  I hadn't yet seen the OP's reply above when guessing that number (which would be more typical when serving end users over the web).

Re^3: large perl module
by minek (Novice) on Mar 04, 2010 at 22:27 UTC
    Within Apache it might of course be different,
    but judging from running a test Perl script from the command line, the load time of the script/module is below 0.5 s.
    And MaxRequestsPerChild is actually set to 10000.
    So it should be fine..

    Thanks, Dan
      And if your webserver has to service 1000 requests per second, a child worker is recycled every 10 seconds. It is not just the loading of the module that needs to be taken into account, but also the time spent cleanly destroying the child workers and restarting them. That can actually take a lot longer, so you want to keep the total restart time as short as possible by avoiding long module load times. Every second saved helps.
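The point above can be put into numbers too. This is a hedged sketch using the hypothetical figures from the reply (1000 requests/s, MaxRequestsPerChild = 10000, and an assumed 5 s of combined teardown-plus-reload cost per recycled child):

```python
# Sketch of the child-recycling overhead under high load.
# All figures are hypothetical, taken from the discussion above.

req_per_sec = 1000                # assumed aggregate request rate
max_requests_per_child = 10_000   # Apache MaxRequestsPerChild
restart_cost_s = 5.0              # assumed teardown + module reload per child

# In aggregate, one child hits its request limit and is recycled
# every max_requests_per_child / req_per_sec seconds.
restarts_per_sec = req_per_sec / max_requests_per_child   # 0.1 -> one every 10 s

# Worker-seconds per second spent recycling instead of serving:
# at these numbers, half a worker's worth of capacity is lost to restarts.
capacity_lost = restarts_per_sec * restart_cost_s
```

This is why shaving even a second off the module load time matters once the restart rate is high: the lost capacity scales linearly with the per-restart cost.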

      It is just a gut feeling, but a database solution feels more efficient to me.

      CountZero

      "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James