in reply to Need suggestion on problem to distribute work

I am trying to understand your situation. Before suggesting any actual code, I'd like to sanity check my understanding. I think that you want to have multiple DB and remote server interactions underway at the same time? A possible scenario is like below.

The main program pushes new work onto a queue that is accessible by multiple worker threads. If a worker thread is not busy and work is available, it accepts new work and processes the DB and remote-server work items. If your DB and remote server can handle multiple operations at the same time, this will speed things up.

Ultimately there will be a maximum throughput. Some sort of throttle will probably be necessary on the main program so that the work queue doesn't grow to an infinite size. I suspect there will be other complications with error handling. But is this general idea what you seek?

Main Program: there is just one of these

    while (I don't know) {
        generate work item
        push work onto shared work queue
    }

Worker Bee Thread: there will be N of these running in parallel

    # each worker gets its own connections
    connect to DB
    connect to remote server
    while (pop from shared work queue, if queue not empty) {
        manipulate DB
        send to remote server
    }
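The queue-plus-workers idea above can be made concrete with core threads and Thread::Queue. This is a minimal sketch, assuming a threads-enabled perl and a reasonably recent Thread::Queue (for limit and end); the "manipulate DB / send to remote server" steps are stubbed as a simple doubling so the flow is visible, and the queue limit provides the throttle mentioned earlier.

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $work = Thread::Queue->new();   # shared work queue
my $done = Thread::Queue->new();   # results back to the main program
my $NUM_WORKERS = 4;

my @workers = map {
    threads->create(sub {
        # each worker would open its own DB and remote-server connections here
        while (defined(my $item = $work->dequeue())) {
            # stand-in for "manipulate DB; send to remote server"
            $done->enqueue($item * 2);
        }
    });
} 1 .. $NUM_WORKERS;

$work->limit = 100;          # back-pressure: enqueue blocks past 100 items
$work->enqueue($_) for 1 .. 10;
$work->end();                # dequeue returns undef once the queue drains
$_->join() for @workers;
$done->end();

my $total = 0;
while (defined(my $v = $done->dequeue())) {
    $total += $v;
}
print "processed total = $total\n";   # 2 + 4 + ... + 20 = 110
```

The limit is what keeps the main program from outrunning the workers: once the queue holds 100 items, enqueue blocks until a worker catches up.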
Update: Clarification is needed on this point:

    Remote server connection and sending request is taking about 10ms due to latency

Surely you are not connecting and disconnecting for each server request? Connect once, use many.

However, 10ms of remote communication overhead doesn't strike me as particularly long. I work with some networks where a simple ping response takes 60-70ms.

BTW, you don't mention DB processing time, but that can be very significant. A DB commit is "expensive" and requires multiple disk operations. Search for "ACID DB". I suspect the DB operation takes longer than the "send to remote server" operation.
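To illustrate the "connect once, use many" and commit-cost points: here is a sketch that connects once, prepares once, and batches many inserts under a single commit instead of paying the commit cost per row. It assumes DBD::SQLite is available and uses an in-memory database as a stand-in for the real one; the table and row contents are made up for illustration.

```perl
use strict;
use warnings;
use DBI;

# Connect once per worker, not per request. AutoCommit off lets us
# batch all the inserts into one transaction: one commit, one set of
# disk operations, instead of 200 individual "expensive" commits.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 0 });
$dbh->do('CREATE TABLE jobs (payload TEXT)');

my $sth = $dbh->prepare('INSERT INTO jobs (payload) VALUES (?)');
$sth->execute("job-$_") for 1 .. 200;
$dbh->commit;   # one commit for 200 rows

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM jobs');
$dbh->commit;
$dbh->disconnect;
print "rows inserted: $count\n";
```

The same prepare-once, commit-once pattern applies to any DBI driver; with a real network database the savings per batch are usually much larger than with local SQLite.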

Replies are listed 'Best First'.
Re^2: Need suggestion on problem to distribute work
by perlfan (Parson) on Jun 14, 2020 at 23:31 UTC
    Work queue is also what I suggest. But don't use a database as the queue. Use something like a Redis list as a FIFO queue. You could get fancy and make a priority queue using sorted sets, but it sounds like you want something straightforward, and I agree.

    The producer process puts work on the atomic queue, and worker daemons spin, popping off work to do. Sure, you could have the worker daemons fork off children to do the work, but as long as you have the atomic queue you can just have any number of worker daemons checking for work in a loop, so there is no need to get fancy with the worker processes. Redis (and its Perl client) is not the only way to do this, but it's the one I have the most experience with.

    As I stated above, don't use a database to serve the queue. You don't have to use Redis, but do not use a database (it is terribly inefficient for this type of middleware).

    If you wish for the worker process to communicate back to the work producer, you can use a private Redis channel specified in the chunk of work. However, if you want real messaging you would do best to go with something built for that, like RabbitMQ or something similar but lighter weight.

    Work can be shoved into the queue by the producer in JSON or some other easily deserialized format; it can include a "private" Redis channel or "mailbox" for the worker thread to send a message to the producer or some other listener. You could actually set up a private mailbox scheme so that the initial contact with work on the queue allows the producer and consumer to have any sort of meaningful conversation you wish.
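A sketch of the JSON work-item format with a private reply channel. The field names and channel-naming scheme here are illustrative, not prescribed; the Redis calls are shown only in comments, since they require a reachable server, while the JSON round-trip itself runs as-is with core JSON::PP.

```perl
use strict;
use warnings;
use JSON::PP;

# A work item carrying a private "reply_to" channel so the worker can
# message the producer back (id, payload, and channel name are made up).
my $item = {
    id       => 42,
    payload  => 'dial extension 1001',
    reply_to => 'reply:producer-1:42',
};
my $json = encode_json($item);

# With the Redis Perl client (assuming a reachable server), the producer
# and worker sides would look roughly like:
#   my $r = Redis->new;
#   $r->lpush('work', $json);             # producer: push onto FIFO list
#   my (undef, $raw) = $r->brpop('work', 0);  # worker: blocking pop
#   my $work = decode_json($raw);
#   $r->publish($work->{reply_to}, 'done');   # reply on private channel

my $decoded = decode_json($json);
print "item $decoded->{id} replies on $decoded->{reply_to}\n";
```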

    Also note, the 6.x version of Redis supports TLS natively and some level of access control (ACLs). I'd use them if going over the public internet or crossing any sort of untrusted network.

      Nowhere did I say to use the DB as the work queue. In Perl there are ways to push an item onto a "thread-safe" array. Likewise threads can get an item off of this array in a thread-safe way. I guess I should have said "shift off of the array" instead of "pop". I would process requests in roughly FIFO order.
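The thread-safe array with shift can be sketched with core threads::shared and lock(); this assumes a threads-enabled perl, and the items are placeholders.

```perl
use strict;
use warnings;
use threads;
use threads::shared;

# A shared array as the work queue; lock() makes push/shift atomic.
my @work :shared;

sub add_work {
    my ($item) = @_;
    lock(@work);
    push @work, $item;   # producer appends at the tail
}

sub take_work {
    lock(@work);
    return shift @work;  # consumer takes from the head: roughly FIFO
}

add_work($_) for 'a' .. 'c';
my $first = take_work();
print "$first\n";        # shift gives 'a' first; pop would give 'c'
```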
        At some point it will cross his mind. xD

        Also, I was just building on your suggestion rather than top posting. So don't take it as a criticism or think I was actually talking to you.

      Sir, adding MQ or another queue will slow the process down further. I wish to separate the work out of the main queue and get it done in as little time as possible. I am looking into POE, which will scale itself by forking once we send it more work. I have more than sufficient hardware resources, but I need to utilize them now :-)
Re^2: Need suggestion on problem to distribute work
by smarthacker67 (Beadle) on Jun 15, 2020 at 18:20 UTC
    Thanks, all, for your responses.

    When I say a remote server, it's a FreeSWITCH server to which I want to send work, but that takes time, so I can't send more work until the previous work reaches it. I wish to handle it separately.

    I connect with a single database only. E.g., I do 200 insertions plus 200 sends of work to the remote server; because of this, the next batch has to wait. I wish to fork this out of the main loop so that a separate thread will take care of it.

    Since it's call-related, I have X number of servers, so I need to connect to them in round-robin fashion to distribute the work. I hope this clarifies :-)
    I updated the question as well.
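The round-robin distribution over X servers mentioned above might look like this; the server list is hypothetical and a simple counter does the rotation.

```perl
use strict;
use warnings;

# Hypothetical FreeSWITCH server list; names and ports are illustrative.
my @servers = ('fs1.example:8021', 'fs2.example:8021', 'fs3.example:8021');
my $next = 0;

sub next_server {
    my $s = $servers[$next];
    $next = ($next + 1) % @servers;  # wrap around after the last server
    return $s;
}

my @picks = map { next_server() } 1 .. 4;
print join(', ', @picks), "\n";   # cycles back to the first server
```

Each worker (or the dispatcher) calls next_server() per work item, so load spreads evenly without any coordination beyond the counter.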