citycrew has asked for the wisdom of the Perl Monks concerning the following question:

Hello gurus,

I am developing a custom client/server protocol for a project. The client script will connect to a server script on another server via sockets. The server will listen with an open persistent database connection and process requests as they come through, returning a boolean value for each request. That part of the development is fine.

The question I have is on the client's side. The client script needs to keep the socket open and listen for requests. The requests actually come from a php script, so I need a way for the php script to simply do a system call on an intermediary perl script, which communicates with the client script running with the open socket. The requests from the php script will be coming at around 50 per second, and there are 4 servers doing this simultaneously at the moment. This will scale up to 10+ servers. Opening a connection for each one slows the whole process down too much due to the connection overhead, hence why we are using a single persistent connection between the two servers.

The solution I was playing with was using shared memory to communicate between each new php request (run from a php system call) and the client script that has the open socket. The issue I'm running into is that I need to poll the shared memory ID to see if a new request has come through, and this eats up cpu resources. If I can find a way to "wait" for requests to hit the shared memory allocation then it should work fine.
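
(For reference, a minimal sketch of the "wait" idea, assuming SysV IPC is available: a semaphore lets the socket script block in the kernel instead of polling. The key, segment size, and layout here are made-up illustration values.)

    #!/usr/bin/env perl
    # Hypothetical sketch: a SysV semaphore lets the socket script block
    # ("wait") instead of polling the shared memory segment.
    # The key (0xC17C) and 4096-byte segment are made-up illustration values.
    use strict;
    use warnings;
    use IPC::SysV qw( IPC_CREAT );
    use IPC::SharedMem;
    use IPC::Semaphore;

    my $key = 0xC17C;
    my $shm = IPC::SharedMem->new($key, 4096, IPC_CREAT | 0600)
        or die "shmget: $!";
    my $sem = IPC::Semaphore->new($key, 1, IPC_CREAT | 0600)
        or die "semget: $!";

    # Writer side (the per-request perl script) would do:
    #   $shm->write($request, 0, length $request);
    #   $sem->op(0, 1, 0);    # "post": add 1 to the semaphore

    # Reader side (the script holding the open socket):
    while (1) {
        $sem->op(0, -1, 0);                 # blocks until a writer posts
        my $request = $shm->read(0, 4096);
        # ... forward $request over the persistent socket ...
        # (a real version also needs locking so concurrent writers
        #  don't clobber each other)
    }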

If anyone has ideas on a solution to shared memory waiting or even a different approach all together that would be great!

PS. I am trying to determine from the php developer if the php script can open the socket itself, that way I don't need to do this, but there may be multiple threaded php scripts running which wouldn't allow that to work.

Thanks in advance


Replies are listed 'Best First'.
Re: Constant communication between processes
by BrowserUk (Patriarch) on Mar 12, 2010 at 02:56 UTC

    Does this describe the situation?

     _____     _____________           _______________       __________
    |PHP |----|Perl script |--------->| Perl "Client" |     |          |
     -----     -------------          |               |     |          |
     _____     _____________          |               |     |          |
    |PHP |----|Perl script |--------->|               |     |          |
     -----     -------------          |               |---->| Remote   |
     _____     _____________          |               |     | server   |
    |PHP |----|Perl script |--------->|               |     |          |
     -----     -------------          |               |     |          |
     _____     _____________          |               |     |          |
    |PHP |----|Perl script |--------->|               |     |          |
     -----     -------------           ---------------       ----------

    Four (to 10) PHP servers call a perl script (50x/sec) to send something to a Perl "client", which then forwards those messages to a remote server.

    The perl "client" receives boolean responses from the remote server. Does it return those responses to the PHP servers?

    The Perl "client" is there to prevent the perl script from having to reestablish a connection to the remote server every time. But having the perl script talk to the perl "Client" via socket would just move the problem back a stage, so your hoping that they can communicate via shared memory. The perl script is involved because you don't know if you can do socket communications directly from PHP.

    I find it unlikely that starting a new process and establishing a connection to the shared memory will be any faster than (say) connecting to a server socket within the perl "client".


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
Re: Constant communication between processes
by ikegami (Patriarch) on Mar 12, 2010 at 01:47 UTC

    Why use shared memory instead of a pipe or a socket? In fact, since the server already uses a socket, you could completely avoid the intermediary Perl script.

    Current design: PHP client ⇔ server Perl Proxy client ⇔ server Perl Processor

    Simpler design: PHP client ⇔ server Perl Processor

    but there may be multiple threaded php scripts running which wouldn't allow that to work.

    That makes no sense. You'll need to handle the fact that you have multiple clients no matter what communication mechanism you use.
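
    (A minimal sketch of how the simpler design might handle many clients at once, multiplexing them over one listening socket with IO::Select; the port and the handle_request() helper are made up for illustration.)

    #!/usr/bin/env perl
    # Hypothetical sketch: one Perl server multiplexing many PHP clients.
    use strict;
    use warnings;
    use IO::Socket::INET;
    use IO::Select;

    # Placeholder for the real lookup against the persistent DB connection.
    sub handle_request { my ($line) = @_; return 1 }

    my $server = IO::Socket::INET->new(
        LocalPort => 7777,   # made-up port
        Listen    => 128,
        ReuseAddr => 1,
    ) or die("Can't listen: $!\n");

    my $sel = IO::Select->new($server);

    while (my @ready = $sel->can_read()) {
        for my $fh (@ready) {
            if ($fh == $server) {
                $sel->add($server->accept());   # new PHP client
                next;
            }
            my $line = <$fh>;   # buffered readline is fine for a sketch;
                                # a robust server would use sysread
            if (!defined $line) {               # client disconnected
                $sel->remove($fh);
                close $fh;
                next;
            }
            print {$fh} (handle_request($line) ? "1\n" : "0\n");
        }
    }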

      Why use shared memory instead of a pipe or a socket? In fact, since the server already uses a socket, you could completely avoid the Perl script.

      From what I've read, it's much faster to communicate between processes using shared memory.

      Re the php script: I should have explained that a little better.

      Currently there are 50 requests per second from this php script, and there are 4 servers doing this simultaneously at the moment. This will scale up to 10+ servers. Opening a connection for each one slows the whole process down too much due to the connection overhead, hence why we are using a single persistent connection between the two servers.

      I'm not overly familiar with pipes beyond using them for logging-type stuff. Can they facilitate a request/return type system with an already running script?

        Ok, given your situation and your claim that shared memory is faster, the proxy model might make sense.

        Machine A
        
        PHP ⇐|
        PHP ⇐|  
        PHP ⇐|⇒ Perl Proxy ⇐|
        PHP ⇐|              |
        PHP ⇐|              |
                            |
        Machine B           |⇒ Perl Processor
                            |
        PHP ⇐|              |
        PHP ⇐|              |
        PHP ⇐|⇒ Perl Proxy ⇐|
        PHP ⇐|
        PHP ⇐|
        

        When I say "your claim that shared memory is faster", I don't mean to imply that it's false, just that I don't know it to be true or false. And it seems that you haven't included the time required to synchronise the local threads/processes and the time required to signal/detect a change in the shared memory.

        Can they facilitate a request/return type system to an already running script?

        A named pipe could, kinda, but a unix socket would be better.
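
        (For comparison, a minimal sketch of the named-pipe option, assuming a FIFO at a made-up path. Requests flow one way only, which is the "kinda": replies would need a second pipe, so the unix socket below is the better fit.)

        #!/usr/bin/env perl
        # Hypothetical FIFO sketch: one-way requests into a running script.
        use strict;
        use warnings;
        use POSIX qw( mkfifo );

        my $fifo = '/tmp/requests.fifo';   # made-up path

        mkfifo($fifo, 0600) or die("mkfifo: $!\n") unless -p $fifo;

        # open() blocks until the first writer connects
        open(my $fh, '<', $fifo) or die("open: $!\n");
        while (my $request = <$fh>) {
            # ... process $request; sending a reply back would need a
            # second, per-client FIFO ...
        }
        # <$fh> returns EOF once every writer has closed; a persistent
        # server would reopen the FIFO here.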

        server.pl:

        #!/usr/bin/env perl

        use strict;
        use warnings;

        use Cwd         qw( realpath );
        use IO::Socket  qw( AF_UNIX SOCK_STREAM SOMAXCONN );
        use Path::Class qw( file );

        sub done { exit(0); }

        $SIG{INT } = \&done;
        $SIG{TERM} = \&done;
        $SIG{HUP } = \&done;

        my $socket_path = file(realpath($0))->dir()->file('socket');

        {
            my $server = IO::Socket->new(
                Domain => AF_UNIX,
                Type   => SOCK_STREAM,
                Local  => $socket_path,
                Listen => SOMAXCONN,
            ) or die("Can't create server socket: $!\n");

            eval 'END { unlink $socket_path } 1'
                or die $@;

            while (my $client = $server->accept()) {
                # ...
            }

            die("Can't accept: $!\n");
        }

        client.pl:

        #!/usr/bin/env perl

        use strict;
        use warnings;

        use Cwd         qw( realpath );
        use IO::Socket  qw( AF_UNIX SOCK_STREAM );
        use Path::Class qw( file );
        use Time::HiRes qw( time );

        my $socket_path = file(realpath($0))->dir()->file('socket');

        my $num_connects = $ARGV[0] || 1000;

        my $stime = time;
        for (1..$num_connects) {
            my $client = IO::Socket->new(
                Domain => AF_UNIX,
                Type   => SOCK_STREAM,
                Peer   => $socket_path,
            ) or die("Can't connect to server socket: $!\n");

            # ...
        }
        my $etime = time;

        printf("%.3fms per connection\n", ($etime-$stime)/$num_connects*1000);

        On a busy web server, I get 0.199ms per connection.

        I don't know how long connecting to the shared memory, ensuring exclusivity and notifying the server would take, but I don't think it's worth the extra effort even if it is faster.

Re: Constant communication between processes
by citycrew (Acolyte) on Mar 12, 2010 at 12:16 UTC

    Thanks for the feedback, guys! As a result, I have done some benchmarking on the following:

    • 200 iterations of opening a simple perl script - took 7 seconds (28.5/s)
    • 5000 iterations of opening a socket inside a running perl script - took 10 seconds (500/s)

    So basically, the overhead of just starting a perl script negates any benefit of using shared memory to communicate with a script holding a persistent open socket.

    I will talk to the php developer and try to move forward with opening the socket directly from the php script. Even if the php script is forked off and each process has to open its own socket, it will still be faster.

    Thanks again for your feedback, ikegami and BrowserUk.

Re: Constant communication between processes
by cdarke (Prior) on Mar 12, 2010 at 11:47 UTC