in reply to Re: Constant communication between processes
in thread Constant communication between processes

Why use shared memory instead of a pipe or a socket? In fact, since the server already uses a socket, you could completely avoid the Perl script.

From what I've read, it's much faster to communicate between processes using shared memory.

Re the PHP script: I should have explained that a little better.

Currently there are 50 requests per second from this PHP script, and there are 4 servers doing this simultaneously at the moment. This will scale up to 10+ servers. Opening a connection for each request slows the whole process down too much because of the connection overhead, hence the single persistent connection held open between the two servers.

I'm not overly familiar with pipes beyond using them for logging-type stuff. Can they facilitate a request/return type system to an already running script?


Re^3: Constant communication between processes
by ikegami (Patriarch) on Mar 12, 2010 at 02:26 UTC

    Ok, given your situation and your claim that shared memory is faster, the proxy model might make sense; a sketch follows the diagram.

    Machine A
    
    PHP ⇐|
    PHP ⇐|  
    PHP ⇐|⇒ Perl Proxy ⇐|
    PHP ⇐|              |
    PHP ⇐|              |
                        |
    Machine B           |⇒ Perl Processor
                        |
    PHP ⇐|              |
    PHP ⇐|              |
    PHP ⇐|⇒ Perl Proxy ⇐|
    PHP ⇐|
    PHP ⇐|
    
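    A rough sketch of one proxy box, assuming a line-based protocol; the host, port and socket path are made-up placeholders, not anything from your setup:

    #!/usr/bin/env perl
    use strict;
    use warnings;

    use IO::Socket::INET ();
    use IO::Socket::UNIX ();

    # One persistent TCP connection to the Perl processor.
    my $upstream = IO::Socket::INET->new(
        PeerAddr => 'processor.example.com',   # hypothetical host
        PeerPort => 9000,                      # hypothetical port
        Proto    => 'tcp',
    ) or die("Can't reach processor: $!\n");
    $upstream->autoflush(1);

    # Local unix socket the PHP processes connect to.
    my $socket_path = '/tmp/proxy.sock';       # hypothetical path
    unlink($socket_path);
    my $server = IO::Socket::UNIX->new(
        Local  => $socket_path,
        Listen => 5,
    ) or die("Can't create local socket: $!\n");

    while (my $client = $server->accept()) {
        $client->autoflush(1);
        if (defined(my $request = <$client>)) {  # one request per line
            print {$upstream} $request;          # reuse the one TCP link
            my $reply = <$upstream>;             # one reply per line
            print {$client} $reply if defined $reply;
        }
        close($client);
    }

    Each PHP hit then pays only a cheap local connect, and the remote connection overhead is paid once.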

    When I say "your claim that shared memory is faster", I don't mean to imply that it's false, just that I don't know it to be true or false. And it seems that you haven't included the time required to synchronise the local threads/processes and the time required to signal/detect a change in the shared memory.
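
    For comparison, even a minimal shared-memory exchange needs a semaphore around it. A sketch using Perl's built-in SysV IPC functions (the key, size and payload are arbitrary):

    #!/usr/bin/env perl
    use strict;
    use warnings;

    use IPC::SysV qw( IPC_PRIVATE IPC_CREAT SETVAL S_IRUSR S_IWUSR );

    my $SIZE = 1024;

    # One shared segment plus one semaphore to guard it. The semaphore
    # operations are the synchronisation cost mentioned above; they
    # are paid on every request.
    defined(my $shm = shmget(IPC_PRIVATE, $SIZE, IPC_CREAT | S_IRUSR | S_IWUSR))
        or die("shmget: $!");
    defined(my $sem = semget(IPC_PRIVATE, 1, IPC_CREAT | S_IRUSR | S_IWUSR))
        or die("semget: $!");
    semctl($sem, 0, SETVAL, 1) or die("semctl: $!");   # start unlocked

    # Writer: P (acquire), write, V (release).
    semop($sem, pack("s!3", 0, -1, 0)) or die("semop: $!");
    shmwrite($shm, "request payload", 0, $SIZE) or die("shmwrite: $!");
    semop($sem, pack("s!3", 0,  1, 0)) or die("semop: $!");

    # The reader does the same acquire/release dance, and still needs
    # some way to learn that new data arrived (polling, a signal, or
    # a second semaphore).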

    Can they facilitate a request/return type system to an already running script?

    A named pipe could, kinda, but a unix socket would be better.
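
    To illustrate the "kinda": a request/return over named pipes needs two FIFOs and careful open ordering, and it serves only one client at a time. A sketch, with made-up paths and a one-line-per-request protocol:

    #!/usr/bin/env perl
    use strict;
    use warnings;

    use POSIX qw( mkfifo );

    my $req_path = '/tmp/req.fifo';   # clients write requests here
    my $res_path = '/tmp/res.fifo';   # server writes replies here

    for my $path ($req_path, $res_path) {
        next if -p $path;
        mkfifo($path, 0600) or die("mkfifo $path: $!");
    }

    while (1) {
        # Each open() blocks until a client opens the other end.
        open(my $req, '<', $req_path) or die("open $req_path: $!");
        chomp(my $request = <$req> // '');
        close($req);

        open(my $res, '>', $res_path) or die("open $res_path: $!");
        print {$res} "processed: $request\n";
        close($res);
    }

    A client does the mirror image: open the request FIFO for writing, print one line, close it, then open the response FIFO and read the reply. Two concurrent clients can cross wires, which is why the unix socket approach below is the better fit.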

Re^3: Constant communication between processes (timing unix sockets)
by ikegami (Patriarch) on Mar 12, 2010 at 02:50 UTC

    server.pl:

    #!/usr/bin/env perl
    use strict;
    use warnings;

    use Cwd         qw( realpath );
    use IO::Socket  qw( AF_UNIX SOCK_STREAM SOMAXCONN );
    use Path::Class qw( file );

    # Exit via the normal path on these signals so END blocks run.
    sub done { exit(0); }
    $SIG{INT } = \&done;
    $SIG{TERM} = \&done;
    $SIG{HUP } = \&done;

    # The socket file lives next to this script.
    my $socket_path = file(realpath($0))->dir()->file('socket');

    {
        my $server = IO::Socket->new(
            Domain => AF_UNIX,
            Type   => SOCK_STREAM,
            Local  => $socket_path,
            Listen => SOMAXCONN,
        ) or die("Can't create server socket: $!\n");

        # Register the cleanup only once the socket file exists.
        eval 'END { unlink $socket_path } 1'
            or die $@;

        while (my $client = $server->accept()) {
            # ...
        }

        die("Can't accept: $!\n");
    }
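
    The accept loop above is left empty; one way it might be filled in (the echo-style, line-per-request protocol here is purely hypothetical) is:

    while (my $client = $server->accept()) {
        $client->autoflush(1);
        while (my $request = <$client>) {
            chomp $request;
            # Hand $request to the real processing code, then reply.
            print {$client} "ok: $request\n";
        }
    }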

    client.pl:

    #!/usr/bin/env perl
    use strict;
    use warnings;

    use Cwd         qw( realpath );
    use IO::Socket  qw( AF_UNIX SOCK_STREAM );
    use Path::Class qw( file );
    use Time::HiRes qw( time );

    # Same socket file server.pl created next to the scripts.
    my $socket_path = file(realpath($0))->dir()->file('socket');

    my $num_connects = $ARGV[0] || 1000;

    my $stime = time;
    for (1..$num_connects) {
        my $client = IO::Socket->new(
            Domain => AF_UNIX,
            Type   => SOCK_STREAM,
            Peer   => $socket_path,
        ) or die("Can't connect to server socket: $!\n");

        # ...
    }
    my $etime = time;

    printf("%.3fms per connection\n", ($etime-$stime)/$num_connects*1000);
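
    To reproduce the timing, start server.pl first (it creates the socket file next to itself), then run client.pl from the same directory, optionally passing a connection count: perl client.pl 10000.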

    On a busy web server, I get 0.199ms per connection.

    I don't know how long connecting to shared memory, ensuring exclusivity and notifying the server would take, but I don't think it's worth the extra effort even if it is faster.