PerlMonks

Re: Resource pools and concurrency

by Moron (Curate)
on Jun 27, 2007 at 09:59 UTC ( [id://623571] )


in reply to Resource pools and concurrency

Can you say what the existing interface to the registry modules is like? Is it persistent? Is it blocking? And what platform is it running on?
__________________________________________________________________________________

^M Free your mind!

Re^2: Resource pools and concurrency
by mattk (Pilgrim) on Jun 27, 2007 at 11:09 UTC
    At the moment, we have a few background processes that continually poll the registries to deal with anything that needs (near) real-time processing. There are also customer-facing tools (APIs, web pages) that use the same modules, and these would mainly benefit from the speed increase; the daemons would just benefit from a slightly more abstracted design. Since it takes 4 or 5 seconds to set up a connection and only 0.5s to make a query, it would be ideal to keep the handles alive and share them when needed. And they're blocking right now... it would be a pretty big timesink to move all the existing modules onto non-blocking IO. The box is running a 2.4.22 kernel with Perl 5.8.1.
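    Keeping the handles alive essentially means caching one connection per registry inside a process and reconnecting only when the cached handle has died. A minimal sketch of that idea, in which Registry::Client and its connect/ping methods are hypothetical stand-ins for the real registry modules:

        # Cache one live registry connection per process so the 4-5 second
        # setup cost is paid once rather than before every 0.5s query.
        # Registry::Client and its connect/ping methods are hypothetical
        # stand-ins for the real registry modules.
        package RegistryCache;
        use strict;
        use warnings;

        my %handle;    # registry name => live connection object

        sub handle_for {
            my ($registry) = @_;
            # Reconnect only if there is no handle yet or it has gone stale.
            unless ( $handle{$registry} && $handle{$registry}->ping ) {
                $handle{$registry} = Registry::Client->connect($registry);
            }
            return $handle{$registry};
        }

        1;

    The catch is that this only keeps a handle alive within one process; sharing handles across the daemons and the customer-facing tools is what the reply below addresses.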
      My first idea: I would reject trying to share handles within a single daemon. Why not clone the daemons instead and add a transaction-monitoring layer that sorts out which daemons are busy and which are not? Keep a stock of available daemons of a particular kind (say 8) that have completed initialisation but are not yet servicing requests, so that if two become busy handling requests, a ninth and tenth start initialising and the stock of 8 idle, identical daemons (for example) is always ready for requests that have not yet arrived. When more than the required stock of clones is sitting idle, kill off the excess to control their number. The transaction-monitoring layer needs to identify requesters and keep a table of which cloned processes are allocated to which requests and which are free. A rough sketch of this follows the update below.

      Update: I have wound my thinking back to the functional design stage, which I prefer to have in a feasible state before I feel safe suggesting whether such clones should be independent processes, forks, threads or POE sessions.
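      A rough sketch of the clone-stock idea in plain fork terms (the choice between processes, threads and POE sessions is deliberately left open, as per the update): a parent keeps a fixed stock of pre-initialised workers forked and ready, and tops the pool back up as clones are used. The init_connections() and serve_one_request() subs are hypothetical stand-ins for the expensive registry setup and the actual request handling, and the busy/free table the transaction-monitoring layer would keep is reduced to a comment.

          use strict;
          use warnings;
          use POSIX ':sys_wait_h';

          my $STOCK = 8;     # initialised, idle clones to keep on hand
          my %worker;        # pid => 1 for every live clone we have forked

          sub spawn_worker {
              defined( my $pid = fork ) or die "fork failed: $!";
              if ( $pid == 0 ) {
                  init_connections();     # hypothetical: the 4-5 second registry setup
                  serve_one_request();    # hypothetical: wait for and handle one request
                  exit 0;
              }
              $worker{$pid} = 1;
              return $pid;
          }

          while (1) {
              # Reap clones that have finished so the table stays accurate.
              while ( ( my $pid = waitpid( -1, WNOHANG ) ) > 0 ) {
                  delete $worker{$pid};
              }
              # Top the stock back up so $STOCK initialised clones are always waiting.
              spawn_worker() while keys %worker < $STOCK;
              # A real transaction-monitoring layer would also record which clones
              # are busy, which requester each is allocated to, and signal any
              # excess idle clones to exit; that bookkeeping is omitted here.
              sleep 1;
          }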

      __________________________________________________________________________________

      ^M Free your mind!
