in reply to Daemonizing (or otherwise speeding up) a high-overhead script?
There is PPerl, which basically implements a prefork server for arbitrary scripts. The idea is that the script launches, does the costly initialization once, and then forks into the background. Whenever you launch the script again, it connects to that background server and skips the costly initialization.
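As a minimal sketch of how PPerl is typically used: you point the script's shebang at the pperl binary instead of perl (the install path and the heavy module here are assumptions for illustration):

```perl
#!/usr/bin/pperl
# Instead of #!/usr/bin/perl: the first invocation pays the startup
# cost (compilation, use statements) and stays resident as a daemon;
# subsequent invocations are dispatched to that resident process.
use strict;
use warnings;
use Some::Heavy::Module;    # hypothetical stand-in for slow startup

print "ready\n";
```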
The problem is that fork and database handles (like all external resources) don't play well together: a forked child shares the parent's connection, so you have to reinitialize such handles after each fork.
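A minimal sketch of the usual DBI workaround (the DSN and credentials are placeholders): the child must not use the inherited handle, which shares the parent's socket, and instead opens its own connection.

```perl
use strict;
use warnings;
use DBI;

# Placeholder DSN/credentials -- adjust for your database.
# AutoInactiveDestroy keeps a forked child's exiting process from
# closing the connection out from under the parent.
my $dbh = DBI->connect('dbi:Pg:dbname=jobs', 'user', 'pass',
    { RaiseError => 1, AutoInactiveDestroy => 1 });

my $pid = fork // die "fork failed: $!";
if ($pid == 0) {
    # Child: reconnect with the same parameters instead of reusing
    # the inherited handle.
    my $child_dbh = $dbh->clone;
    my ($ok) = $child_dbh->selectrow_array('SELECT 1');
    exit 0;
}
waitpid($pid, 0);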
The other replies have already recommended frameworks like Mojolicious for implementing a small server, and I think that is a sound approach. Personally, I would look at using some kind of job queue, be it directory/file based or database based. Minion, for example, is such a job queue that also comes with a monitoring UI.
This means splitting your code into a script/frontend that submits jobs and a "worker" component that does the costly initialization and then processes the submitted jobs. The workers pick jobs from the queue, and depending on machine load you can launch more workers or kill some off.
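As a hedged sketch of the worker side of such a split, here is a simple directory-based variant (the spool path, job-file naming, and the costly_init/process_job subs are all made up for illustration): the worker pays the initialization cost once and then loops over queued job files.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Spec;

my $spool = '/var/spool/myjobs';     # hypothetical queue directory

# Costly one-time initialization happens once per worker, not per job.
my $ctx = costly_init();

while (1) {
    opendir my $dh, $spool or die "opendir $spool: $!";
    my @jobs = sort grep { /\.job\z/ } readdir $dh;
    closedir $dh;

    for my $file (@jobs) {
        my $path = File::Spec->catfile($spool, $file);
        process_job($ctx, $path);    # your actual work goes here
        unlink $path or warn "unlink $path: $!";
    }
    sleep 1 unless @jobs;            # idle politely when queue is empty
}

sub costly_init { return {} }        # stand-in for slow startup work
sub process_job { my ($ctx, $path) = @_; warn "processing $path\n" }
```

The frontend then only has to drop a file into the spool directory, which is cheap and needs no database connection at all.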
Update: I have to retract my recommendation of Minion for this situation, because its worker forks a new instance for every job, which means connecting to the database for every single job. In a quick scan of the documentation I didn't see a way to have one worker process multiple jobs before it exits.
Re^2: Daemonizing (or otherwise speeding up) a high-overhead script?
by cavac (Prior) on Aug 24, 2023 at 10:27 UTC