Hello gurus,
I am developing a custom client/server protocol for a project. The client script connects to a server script on another machine via sockets. The server listens with an open, persistent database connection and processes requests as they come in, returning a boolean value for each one. That part of the development is fine.
The question I have is on the client side. The client script needs to keep the socket open and listen for requests. The requests actually come from a PHP script, so I need a way for the PHP script to simply make a system call to an intermediary Perl script, which then communicates with the client script holding the open socket. The requests from the PHP script arrive at around 50 per second, and there are four servers doing this simultaneously at the moment, scaling up to 10+ servers. Opening a new connection for each request slows the whole process down too much because of the connection overhead, which is why we keep a single persistent connection open between the two servers.
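For reference, the client end with the persistent socket looks roughly like this (the host, port, and newline-framed protocol are placeholders I made up for the sketch):

```perl
#!/usr/bin/perl
# Rough sketch of the client end holding the persistent socket.
# Host, port, and the line-based framing are placeholders.
use strict;
use warnings;
use IO::Socket::INET;

my $server = IO::Socket::INET->new(
    PeerAddr => 'db-server.example.com',   # hypothetical host
    PeerPort => 9000,                      # hypothetical port
    Proto    => 'tcp',
) or die "connect: $!";
$server->autoflush(1);

# One request/response round trip on the already-open connection.
sub query {
    my ($request) = @_;
    print $server "$request\n";
    chomp(my $answer = <$server>);         # server returns "1" or "0"
    return $answer;
}
```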
The solution I was playing with was using shared memory to communicate between each new PHP request (run from a PHP system call) and the client script that has the open socket. The issue I'm running into is that I need to poll the shared memory ID to see whether a new request has come through, and that eats up CPU. If I can find a way to "wait" for requests to hit the shared memory segment, it should work fine.
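From what I can tell, a SysV semaphore sitting next to the segment might give me that blocking wait: the reader does a semop of -1 and sleeps in the kernel until a writer posts. Here's a stripped-down sketch of what I mean; the keys and segment size are made up:

```perl
#!/usr/bin/perl
# Sketch only: block on a SysV semaphore instead of polling the
# shared memory segment. Keys and sizes below are invented.
use strict;
use warnings;
use IPC::SysV qw(IPC_CREAT S_IRUSR S_IWUSR);
use IPC::Semaphore;

my $SHM_KEY  = 0x4D59_5348;   # hypothetical key
my $SEM_KEY  = 0x4D59_5345;   # hypothetical key
my $SHM_SIZE = 1024;

my $shmid = shmget($SHM_KEY, $SHM_SIZE, IPC_CREAT | S_IRUSR | S_IWUSR);
defined $shmid or die "shmget: $!";
my $sem = IPC::Semaphore->new($SEM_KEY, 1, IPC_CREAT | S_IRUSR | S_IWUSR)
    or die "semget: $!";

if (@ARGV and $ARGV[0] eq 'writer') {
    # Intermediary script (called from PHP): drop the request into
    # shared memory, then bump the semaphore to wake the reader.
    my $request = $ARGV[1] // 'ping';
    shmwrite($shmid, pack('N/a*', $request), 0, $SHM_SIZE)
        or die "shmwrite: $!";
    $sem->op(0, 1, 0);                    # V (post)
}
else {
    # Long-running client script: sleeps in the kernel until the
    # semaphore goes positive -- no CPU burned on polling.
    while (1) {
        $sem->op(0, -1, 0);               # P (wait), blocks
        shmread($shmid, my $buf, 0, $SHM_SIZE) or die "shmread: $!";
        my ($request) = unpack('N/a*', $buf);
        print "got request: $request\n";
        # ... forward $request down the persistent socket here ...
    }
}
```

One wrinkle I can already see: at 50 requests per second, several writers could race on the single segment, so I'd probably need a second semaphore as a mutex around the write, or one segment per pending request.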
If anyone has ideas on a solution for waiting on shared memory, or even a different approach altogether, that would be great!
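The other approach I've been turning over is dropping shared memory entirely and having the client script listen on a local Unix domain socket: accept() blocks on its own, a local connect is cheap compared to the remote TCP connect, and the intermediary script just connects, writes a line, and reads the answer. A rough sketch, with the socket path, host, and port made up:

```perl
#!/usr/bin/perl
# Alternative sketch, no shared memory: the long-running client
# script listens on a local Unix domain socket. accept() blocks in
# the kernel, so nothing polls. Path/host/port are placeholders.
use strict;
use warnings;
use Socket qw(SOCK_STREAM);
use IO::Socket::UNIX;
use IO::Socket::INET;

my $backend = IO::Socket::INET->new(
    PeerAddr => 'db-server.example.com',   # hypothetical host
    PeerPort => 9000,                      # hypothetical port
    Proto    => 'tcp',
) or die "connect: $!";
$backend->autoflush(1);

my $path = '/tmp/dbproxy.sock';            # hypothetical path
unlink $path;
my $listener = IO::Socket::UNIX->new(
    Local  => $path,
    Type   => SOCK_STREAM,
    Listen => 16,
) or die "listen: $!";

while (my $conn = $listener->accept) {
    chomp(my $request = <$conn> // '');
    print $backend "$request\n";           # one round trip on the
    chomp(my $answer = <$backend>);        # persistent connection
    print $conn "$answer\n";
    close $conn;
}
```

And if it turns out PHP's fsockopen can talk to a Unix socket via a unix:// address (I believe it can), the intermediary Perl script could go away entirely, which ties into the PS below.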
PS. I am trying to find out from the PHP developer whether the PHP script can open the socket itself, in which case I wouldn't need any of this, but there may be multiple threaded PHP scripts running, which would keep that from working.
Thanks in advance