in reply to Re: Daemonizing (or otherwise speeding up) a high-overhead script?
in thread Daemonizing (or otherwise speeding up) a high-overhead script?

I agree, splitting background tasks into dedicated, small workers with proper job queueing is certainly the way to go.

In my systems, I have various "tasks to do" tables that the workers work on. The workers run all the time, just waiting for new jobs to be scheduled. I also use this for time-based scheduling: it's often better to run the "do something every 5 minutes" stuff inside the worker instead of calling it from a cron job. In many (if not most) cases the requirement is really "once per hour" rather than "at the start of every hour", so scheduling relative to when the worker started lets you spread out the server load a bit better.
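A minimal sketch of such a worker loop, with an in-memory array standing in for the "tasks to do" table (a real setup would query the database, e.g. via DBI); all names here are made up for illustration:

```perl
#!/usr/bin/env perl
# Sketch of a long-running worker: poll a job queue, plus run a
# "roughly hourly" task offset from worker start time rather than
# pinned to the top of the hour.
use strict;
use warnings;

# stand-in for the "tasks to do" database table
my @task_queue = (
    { id => 1, type => 'send_mail' },
    { id => 2, type => 'build_report' },
);

my $next_hourly = time();  # "once per hour", not "on the hour"

sub run_worker {
    my ($max_cycles) = @_;   # bounded here so the sketch terminates
    my @done;
    for (1 .. $max_cycles) {
        # time-based work: fires relative to worker start, so many
        # workers don't all hammer the server at :00
        if (time() >= $next_hourly) {
            push @done, 'hourly_maintenance';
            $next_hourly = time() + 3600;
        }
        # queued work: grab the next scheduled task, if any
        if (my $task = shift @task_queue) {
            push @done, $task->{type};
        }
        else {
            sleep 1;  # nothing to do, wait before polling again
        }
    }
    return @done;
}

my @processed = run_worker(3);
print join(',', @processed), "\n";
```

In a real worker the loop runs forever and the `shift` becomes a `SELECT ... FROM tasks WHERE done = 0` (or similar) against the jobs table.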

Whenever I need a worker to react in a somewhat realtime manner (for example, processing and printing an invoice right after the user has finished input), I add an IPC (interprocess communication) "trigger" that tells the worker to check the database (or just do whatever needs to be done) NOW.
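The trigger idea can be sketched with nothing but core Perl: instead of sleeping a fixed interval, the worker blocks on a socket with a timeout, so a single byte from another process cuts the wait short. Here `socketpair()` stands in for whatever IPC channel is actually used (Net::Clacks, a UNIX socket, ...), and the function names are illustrative:

```perl
#!/usr/bin/env perl
# Sketch of a "wake up NOW" trigger for a polling worker.
use strict;
use warnings;
use Socket;
use IO::Select;

# the two ends of the trigger channel (real code would use a
# named socket or message bus so separate processes can connect)
socketpair(my $worker_end, my $trigger_end, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
    or die "socketpair: $!";

# producer side: signal that new work exists
sub send_trigger { syswrite($trigger_end, "!") }

# worker side: wait up to $timeout seconds; true if triggered,
# false on timeout (then just do the normal periodic DB check)
sub wait_for_trigger {
    my ($timeout) = @_;
    my $sel = IO::Select->new($worker_end);
    if ($sel->can_read($timeout)) {
        sysread($worker_end, my $buf, 1);  # drain the trigger byte
        return 1;
    }
    return 0;
}

send_trigger();
print wait_for_trigger(5) ? "triggered\n" : "timeout\n";
```

The nice property is that the worker still degrades gracefully: if the trigger never arrives, the timeout falls through to the regular database poll.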

Shameless plug: in my projects I use Net::Clacks for IPC; see also this slightly outdated example: Interprocess messaging with Net::Clacks

PerlMonks XP is useless? Not anymore: XPD - Do more with your PerlMonks XP