Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
We have a large app built in Catalyst that has a very high startup overhead, thanks to the large number of database connections that have to be set up at the start of every run. It takes about 30 seconds before it starts to respond. Only the startup is affected by this; once the app is running, there's no memory pressure or CPU stress.
Normally this isn't a big deal; we roll out new versions once every week or three, and deploy on different boxes in turn, so the app is always up. However, we also run frequent cron scripts based on the app, and there the startup becomes a real problem. We run the script with different parameters to process different tasks, sometimes every five minutes, and at that frequency a 30-second startup cost is significant. (Again, once the app is loaded, the processing itself usually takes only a second or two.) In some cases we need to run multiple instances in parallel, because some tasks have network delays from dealing with external APIs, so we'll do things like run one script to process odd-numbered IDs and another to process even-numbered IDs.
We've looked at some discussions here and on Stack Overflow about daemonizing the script, but it's not really clear how we'd control it the way we do now. That is, currently we might run a dozen versions from cron, with "process task A", "process task B", "process even-number task C's", "process odd-number task C's", etc. This is easy to manage; if we add a "task D", we add another cron job to "process task D"; the tasks that only need to run once a day get a cron job that does them at 3 a.m.; etc.
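For context, the current scheduling looks something like this (script name, flags, and schedules are invented for illustration; the real jobs just pass different parameters to the same script):

```
# hypothetical crontab for the setup described above
*/5 * * * *  /opt/app/script/process_tasks.pl --task A
*/5 * * * *  /opt/app/script/process_tasks.pl --task B
*/5 * * * *  /opt/app/script/process_tasks.pl --task C --ids even
*/5 * * * *  /opt/app/script/process_tasks.pl --task C --ids odd
0 3 * * *    /opt/app/script/process_tasks.pl --task D
```

Adding a task means adding a line; removing one means deleting a line. That's the level of simplicity we'd like to keep.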
But communicating with daemons isn't something we have experience with--none of us are systems guys--and trying to write the logic into the script itself seems impossible. Are there guidelines for how to deal with this? Assume there's no way to reduce the startup costs.
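To make the question concrete, here is a minimal sketch of the kind of thing we imagine people mean by "daemonize it": a long-lived process pays the startup cost once and listens on a Unix-domain socket, and the cron side shrinks to a tiny client that connects and sends the task name. Everything here (socket path, the "done:" reply protocol, the `quit` command) is made up for illustration; the demo forks the "daemon" in the same script so it's self-contained.

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use IO::Socket::UNIX;
use Socket qw(SOCK_STREAM);
use File::Temp qw(tempdir);

my $dir  = tempdir(CLEANUP => 1);
my $path = "$dir/taskd.sock";          # hypothetical socket path

# Create the listening socket before forking so the client can't race it.
my $server = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $path,
    Listen => 5,
) or die "listen: $!";

my $pid = fork() // die "fork: $!";
if ($pid == 0) {                       # --- daemon side ---
    # The expensive startup (all the database connections) would
    # happen once, here, before the accept loop.
    while (my $c = $server->accept) {
        chomp(my $task = <$c>);        # e.g. "process task A"
        # A real daemon would dispatch($task) here; we just acknowledge.
        print $c "done: $task\n";
        close $c;
        last if $task eq 'quit';       # demo only: lets the parent shut it down
    }
    exit 0;
}

close $server;                         # --- cron side (client) ---
my @replies;
for my $task ('process task A', 'quit') {
    my $s = IO::Socket::UNIX->new(Type => SOCK_STREAM, Peer => $path)
        or die "connect: $!";
    print $s "$task\n";
    push @replies, scalar <$s>;        # block until the daemon replies
    close $s;
}
waitpid $pid, 0;
print @replies;                        # prints "done: process task A", "done: quit"
```

The appeal is that cron stays the control surface: each cron line becomes a one-liner client that sends "process task A" and exits, and adding a task D is still just adding a crontab entry. What we don't know is whether this is the sane way to do it, or what we'd be signing up for in terms of supervising the daemon itself (restarts, stale sockets, hung tasks, and so on).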