stewe has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,

I come in shame, as I have to admit that I use Perl combined with MySQL in a way that can't be right. At least it feels wrong.
I've got an application written in Perl and embedded into Apache, plus some fancy scripts which I call agents. Now: when certain things happen in Apache, say I get a request from the outside world and have the urge to tell my mighty agent about it, what I do is insert a string of JSON into a table which I call "do".
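
For concreteness, the producer side amounts to a single INSERT. A minimal sketch, assuming an existing DBI handle and JSON::PP for encoding (the payload and the way ts gets filled in are made up for illustration; the column names come from the SELECT below):

    use JSON::PP qw(encode_json);

    # Somewhere in the Apache request handler -- $dbh is the usual DBI handle
    my $payload = encode_json({ action => 'notify', detail => 'something happened' });
    $dbh->do(q|INSERT INTO `do` (`do`, `ts`) VALUES (?, NOW())|, undef, $payload);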

My agent does something like this:
while (1) {
    # Poll the "do" table for pending commands
    foreach my $p (@{ $dbh->selectall_arrayref(
                          qq|SELECT id, do, ts FROM do|,
                          { Slice => {} } ) }) {
        # Do what the JSON is telling me
    }
    sleep 1;
}
By now you know what I mean when I say it feels wrong in every possible way. So I started digging: signals, MySQL triggers, POE, AnyEvent, IO:: and so forth. What would you do, or maybe have done, to implement something like that? Don't worry, all I need is some direction as to which would be best and I will read up from there, but I'm overwhelmed by possibilities without having any idea what would really work in production later ... not like threads when I looked at them 7 years ago ^^

Any guidance would be appreciated.

Replies are listed 'Best First'.
Re: Constant SQL Querys to send "signals"
by GrandFather (Saint) on Jun 20, 2016 at 21:24 UTC

    It would be more usual to spin up a task to handle the request immediately rather than having a task sitting spinning its wheels until a request comes in and triggers it into action.

    If the triggered task is long running and you want to decouple it from the response handler for a page have a look at some of the results in the following super search: ?node_id=3989;HIT=long%20running;re=N;Wi

    Otherwise, describe your big picture problem so we can help with that rather than help with implementing a solution to the wrong "answer".

    Premature optimization is the root of all job security
Re: Constant SQL Querys to send "signals"
by kennethk (Abbot) on Jun 20, 2016 at 21:52 UTC
    So first off, in general I think that if it works, it's good. Which is not to say it couldn't be better; but is fixing this worth your time?

    GrandFather's right that it would be more natural to just spin up a new process on demand, rather than relying on something sitting in memory chewing up resources. Auxiliary concerns include: does your "mighty agent" need persistent memory? Do requested tasks need to be performed sequentially? Does the ordering of those tasks matter? Are there security/permissions aspects to consider (i.e. does your "mighty agent" need greater permissions than your other agents)?


    #11929 First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.

Re: Constant SQL Querys to send "signals"
by perlfan (Parson) on Jun 21, 2016 at 21:58 UTC
    If I am understanding correctly, what you want is a producer-consumer (work queue) model; I'd recommend using Redis and a worker daemon that manages a set of child processes to do the work.

    Here's a general Perl/Redis-related talk on the matter.

    Work Queueing With Redis

    Basically, the web request results in "work" getting pushed onto a queue. There are then worker daemons that pop the work off the queue. It's at this point that your INSERTs happen.
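
    A minimal sketch of that flow, assuming the Redis CPAN module; the queue name and job payload are made up for illustration:

        use Redis;
        use JSON::PP qw(encode_json decode_json);

        my $redis = Redis->new(server => '127.0.0.1:6379');

        # Producer: inside the web request, push a description of the work
        $redis->lpush('agent:queue', encode_json({ action => 'notify', id => 42 }));

        # Consumer: the worker daemon blocks until a job shows up
        while (1) {
            my (undef, $payload) = $redis->brpop('agent:queue', 0);   # 0 = wait forever
            my $job = decode_json($payload);
            # ... do the work here, including any database INSERTs ...
        }

    Because BRPOP blocks on the server side, the worker sits idle until there is actually something to do -- no per-second polling loop.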
Re: Constant SQL Querys to send "signals"
by stewe (Initiate) on Jun 21, 2016 at 13:03 UTC
    Indeed, my "question" was a bit broad. Sorry for that.
    The thing is, I have this agent running which holds a connection to a certain endpoint via HTTP for numerous clients. This endpoint allows only one connection per host for all the clients I have. Thus spawning numerous scripts is not an option, as they would constantly block each other.

    Therefore I have this agent which does one thing at a time. On top of that, every time it is called the agent has to fetch a lot of information from the DB which is the same for every client, which is another reason why I don't want to run scripts that ask for the same kind of information over and over. That seems to me like totally unnecessary overhead.

    Don't get me wrong, the agent itself does a great job, but the way I pass "commands" to it feels stupid when there could be an event-driven system.
      You could actually accomplish this with a lock file and a queue directory.

      Every time a new request comes in, the client transaction writes a JSON file into the queue directory. For the sake of no collisions, the file name can be a timestamp followed by some HTTP identifier -- perhaps the IP address.

      The request script can then check if the agent is running by seeing if an agreed-upon file is locked. If it is not, the request script forks off the agent. The advantage of a lock file here is that if the script exits abnormally, the lock is dropped, and so the agent will respawn on the next request.

      The agent moves through the queue, unlinking after each request is processed. Once the queue is empty, it exits.
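
      A rough sketch of the agent side under those assumptions (the directory, the lock file path, and the job file format are all made up for illustration):

          use Fcntl qw(:flock);
          use File::Spec;
          use JSON::PP qw(decode_json);

          my $queue_dir = '/var/spool/myapp/queue';      # assumed locations
          my $lock_path = '/var/run/myapp/agent.lock';

          # Hold the lock for the agent's lifetime; the request script can try
          # the same non-blocking flock to decide whether to fork off a new agent.
          open my $lock, '>', $lock_path or die "Cannot open $lock_path: $!";
          exit 0 unless flock $lock, LOCK_EX | LOCK_NB;  # another agent is already running

          while (1) {
              opendir my $dh, $queue_dir or die "Cannot read $queue_dir: $!";
              my @jobs = sort grep { /\.json\z/ } readdir $dh;
              closedir $dh;
              last unless @jobs;                         # queue drained: exit, lock is released

              for my $name (@jobs) {
                  my $path = File::Spec->catfile($queue_dir, $name);
                  open my $fh, '<', $path or next;
                  my $job = decode_json(do { local $/; <$fh> });
                  close $fh;
                  # ... act on $job ...
                  unlink $path;                          # done with this entry
              }
          }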

      If you do not require realtime response, you can also do this as a cron job. Fewer moving parts.

      Finally, if you are worried about the DB fetches and they do not need to be current, you can use Storable or just use a local JSON file as cache.


      #11929 First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.

        Thanks for the input. In my opinion it comes down to the same thing whether I query a memory table every second or list the contents of a directory every second, and I suspect it costs more to list the directory over and over AND open the files to get their content. That's why I was looking for an event-driven system.
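
        Something like Linux::Inotify2 looks closer to what I had in mind -- a sketch of how the queue directory could be watched without polling (Linux only; the path is made up):

            use Linux::Inotify2;

            my $inotify = Linux::Inotify2->new
                or die "Unable to create inotify object: $!";

            # Run the callback whenever a job file has been completely written
            $inotify->watch('/var/spool/myapp/queue', IN_CLOSE_WRITE, sub {
                my $event = shift;
                my $path  = $event->fullname;
                # ... read the JSON file at $path and act on it ...
            });

            1 while $inotify->poll;   # blocks until events arrive; no per-second loop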