
Re^5: How to maintain a persistent connection over sockets? (Amazing! Multi-tasking without multi-tasking!)

by BrowserUk (Patriarch)
on May 05, 2012 at 05:43 UTC ( [id://969014] )


in reply to Re^4: How to maintain a persistent connection over sockets?
in thread How to maintain a persistent connection over sockets?

That code is remarkable! Also, unfortunately badly wrong, but I'll get back to that.

You have succeeded in writing a multi-tasking server without using any form of multi-tasking. Neither threading, nor forks, nor polling; nor an event loop!

It is utterly, utterly amazing. It took me quite a while to understand just how it achieved that. I now understand your thread title!

It is even more amazing that it achieves the throughput that it does; but is unsurprising that it is not meeting with your expectations.

Your server is resolutely single-tasking. It is also quite difficult to explain, in terms of the code alone, how it manages to give the appearance (and indeed, the actions) of being multi-tasking, so I'm going to resort to an analogy.

How can you conduct two (or more) 'concurrent' conversations using one phone that has neither call-waiting; nor conferencing facilities?

The solution is to ask the other parties to disconnect and redial after each snippet of conversation. One person rings, you say "Hello"; they hang up and redial; when you pick up they reply; then hang up and re-dial; this time when you pick up, you reply; and they hang-up and redial; and so on until the conversation is complete.

And if two or more people follow this procedure, then you will be able to hold 'simultaneous' conversations with all of them. They'll just be very slow and disjointed conversations.

That is exactly analogous to how your server is "working".

I am truly surprised at how you arrived at this solution; and totally amazed at how efficiently it actually works. I guess it personifies the old adage about computers being able to do everything very, very quickly; including the wrong thing :)

Of course, it is unsustainable for an application such as yours. You will need to use some form of multi-tasking.

This comes in (essentially) 4 forms with the following traits:

  1. Event (select) loop.

    Ostensibly simple; lightweight, and efficient.

    The downside is that all state is global and all communications from all clients go through a single point.

    My analogy is having a single telephone and a receptionist who has to respond to every call, perform all the work to satisfy all inbound queries, and relay all outbound information.

    Works well if the inbound queries can be answered immediately with little effort, but falls down when answering a query requires more effort.

    Either every other caller has to wait while the receptionist resolves each query, no matter how long it takes; or the receptionist has to keep interrupting her efforts to resolve the query in order to service other callers.

    The first approach means that many clients will wait a long time, even if their queries are fast, whenever a hard to resolve query gets in before them.

    The second approach means that long queries take even longer, because the work effort to resolve it keeps getting interrupted by new callers.

  2. Coroutines.

    I won't discuss this much as I consider it a retrograde step. Like going back in time to Windows 3.0. Only works if everyone cooperates; and they usually do not.

  3. Multi-processing (forking).

    Can be relatively efficient, even for long queries, because each caller gets their own process to respond to them. The downside is that the responder cannot easily communicate with the receptionist, or the other responders.

    Falls down completely for writes to the shared data, because the child process cannot modify the parent's copy.

    Like having a modern automated switchboard where each new caller is routed directly to the next available agent. The trouble is, each agent sits isolated in their own room with only a copy of the data for reference. They can answer read-only queries, but cannot modify the data that the other agents see. And any modifications they do make cannot be seen by the other agents.

  4. Multi-threading.

    Similar to the above, in that each caller gets their own, dedicated agent, but now all the agents are in the same room and can easily communicate between themselves. They can all make modifications to the shared data; and all can be aware of the modifications made by others.

    The downside -- for a pure Perl, iThreaded implementation -- is that the shared data is (effectively) duplicated for each concurrent client. That makes for extra memory use by the shared data and for slow(ish) inter-agent communications.

    The upside (of iThreads) is that only that data that needs to be shared is, and locking is simple (if not exactly fast); which makes it far easier to ensure that the agents don't trample on each others state accidentally and removes most if not all the potential for the classical threading nasties.

    Perl threading is far simpler than traditional, all-state-shared threading. The penalty you pay for that increased simplicity is in memory and performance.
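For concreteness, the select loop of form 1 can be sketched in a few lines. This is only an illustrative toy, not your code: it multiplexes a socketpair through IO::Select so it runs self-contained, where a real server would watch a listening socket plus its connected clients.

```perl
#! perl -w
# Toy select loop (form 1): one single-tasking 'receptionist'
# services whichever handles are ready. A socketpair stands in
# for real network clients so the sketch is self-contained.
use strict;
use IO::Select;
use Socket;

socketpair( my $server, my $client, AF_UNIX, SOCK_STREAM, PF_UNSPEC )
    or die "socketpair: $!";

my $sel = IO::Select->new( $server );

syswrite $client, "ping\n";                   # a 'caller' sends a query

for my $fh ( $sel->can_read( 1 ) ) {          # service whoever is ready
    sysread $fh, my( $buf ), 1024;
    chomp $buf;
    syswrite $fh, uc( $buf ) . "\n";          # cheap, immediate answer
}

sysread $client, my( $reply ), 1024;
print $reply;                                 # prints "PING"
```

The loop stays responsive only so long as each answer is as cheap as that uc(); the moment the work becomes slow, every other caller waits, which is the weakness described above.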

My preferred solution for your application would be a combination of two of the four. Specifically, I would run a single phone&receptionist (a select loop) within a thread. That select loop would take care of all the communications with the clients, but would hand off queries to a pool of agents (work threads).

That allows the receptionist to respond immediately to new callers and to inbound queries and modification requests from existing clients; whilst the pool of agents (work threads) takes care of doing the actual work. The pool can be tailored (scaled) to fit the available hardware (number of cores; amount of memory), on a case by case basis; whilst being able to both reference and make modifications to the shared data.
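The hand-off half of that hybrid can be sketched with a Thread::Queue worker pool. In the real design the receptionist's select loop would be the thing enqueuing queries; here a plain dispatch loop stands in for it so the sketch runs standalone, and the queue and job names are illustrative only.

```perl
#! perl -w
# Sketch of the worker-pool half of the hybrid. A plain loop
# stands in for the receptionist's select loop; names are
# illustrative, not from the real application.
use strict;
use threads;
use Thread::Queue;

my $Qwork    = Thread::Queue->new;
my $Qresults = Thread::Queue->new;

# The pool of 'agents': each blocks on the work queue, does the
# slow work, and posts its answer back.
my @pool = map {
    threads->create( sub {
        while( defined( my $job = $Qwork->dequeue ) ) {
            $Qresults->enqueue( 'answer:' . reverse( $job ) );
        }
    } );
} 1 .. 4;

# The 'receptionist' merely hands work off; it never blocks on it.
$Qwork->enqueue( $_ ) for qw[ abc defg ];
$Qwork->enqueue( undef ) for @pool;     # one poison pill per agent
$_->join for @pool;

my @results = map { $Qresults->dequeue } 1 .. 2;
print "$_\n" for sort @results;         # answer:cba / answer:gfed
```

Because only the queues are shared, locking is confined to Thread::Queue's internals, which is the simplicity argued for under form 4 above.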

IMO, this will deliver the best combination of responsiveness and functionality for your scenario.

Give me a few days and I'll get back to you with demonstrations of the 3 main candidates plus my preferred hybrid solution.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

The start of some sanity?

Replies are listed 'Best First'.
Re^6: How to maintain a persistent connection over sockets? (Amazing! Multi-tasking without multi-tasking!)
by flexvault (Monsignor) on May 05, 2012 at 08:14 UTC

    BrowserUk,

      Your server is resolutely single tasking. And it is also quite difficult to explain how it manages to give the appearance (and indeed, the actions) of being multi-tasking, in terms of the code alone, ...

    And it has to be. It is the cache server and the single point of all I/O for one environment or class of database(s). All independent processes call it for reading and writing to disk. If it's a write, it updates the cache and adds the record to a queue to be written to disk (by a child process). If it's a read, it checks the cache and returns the cached copy if present, or does the I/O to fetch the record if it exists.
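    As a sanity check of those rules, here is a minimal single-process sketch of that read/write policy. The %disk hash and all the names are stand-ins for the real databases and the deferred-write child; it only illustrates the cache-first logic described above.

```perl
#! perl -w
# Minimal sketch of the cache policy described above: writes
# update the cache and queue a deferred disk update; reads are
# served from cache, falling back to 'disk'. %disk and all
# names are illustrative stand-ins, not the real server.
use strict;

my %cache;
my %disk = ( k1 => 'on-disk value' );   # stands in for the database
my @write_queue;                        # drained later (child process)

sub cache_write {
    my( $key, $value ) = @_;
    $cache{ $key } = $value;              # cache updated immediately...
    push @write_queue, [ $key, $value ];  # ...disk write deferred
}

sub cache_read {
    my( $key ) = @_;
    return $cache{ $key } if exists $cache{ $key };   # cache hit
    return undef unless exists $disk{ $key };         # no such record
    return $cache{ $key } = $disk{ $key };            # fetch and cache
}

cache_write( 'k2', 'new value' );
print cache_read( 'k2' ), "\n";   # served from cache
print cache_read( 'k1' ), "\n";   # fetched from 'disk', now cached
```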

    Background: The Classic Problem!

    Lots of locking has to be used to cache a database between processes. This is exactly where Berkeley DB fails without a separate ( user provided ) locking mechanism. Berkeley DB uses a 4KB root page that is shared by all processes, so you have a built-in race condition.

    So by forcing all cache activity into one process, and only that process locks/unlocks the cache and the related databases, you force an orderly environment. The cache server can have children that lock/unlock the tree that they are working on.

    Now the cache is actually a HoH. For instance, you could have 30 databases in one environment with 400 users, and each database and each user has a set of hashes. The user hashes are mostly maintained in the client ( calling ) processes. But the database hashes can grow large, since they hold frequency-of-use activity as well as data buffers. My first implementation used arrays, but Perl's 'exists' and using hashes has given the performance I wanted and needed.

    When I run benchmarks for the single-user version, the core is maxed out at 100%. When I benchmark the multi-user, the cores run at 8 to 10%, so I'm not utilizing the cache server as well as I could.

    Just to give you the correct picture.

    Thank you

    "Well done is better than well said." - Benjamin Franklin

      And it has to be. ... So by forcing all cache activity into one process, ... you force an orderly environment.

      No it doesn't. You are confusing multi-tasking with multi-process.

      You can multi-task within a single process, and a single thread, using a select loop. Though the APIs are badly documented and so awkward to use that even those who consider themselves experts often use them wrongly. That, combined with the architectural straitjacket it forces you into, means I cannot recommend it. At least not on its own.

      Equally, if you start a thread within your server to converse with each client, then you are still running in a single process, and can still enforce an orderly environment.

      The upside is that your clients no longer have to disconnect after every communication in order to break out of the inner loop within the server and allow other clients to communicate. That is what you are doing now, and it is the source of your multi-client performance woes.

      By way of example, this very simple (crude) threaded server can easily sustain concurrent conversations with 100 clients, and maintain an exchange (client request/server reply) rate of close to 1000 exchanges per second to all of them:

      #! perl -slw
      use strict;
      use threads;
      use IO::Socket;

      $\ = $/ = chr(13).chr(10);

      my $lsn = IO::Socket::INET->new( Reuse => 1, Listen => 1, LocalPort => 12345 )
          or die "Server failed to create listener: $^E";
      print "Server listener created";

      while( my $client = $lsn->accept ) {
          print "Server accepting client connection";
          async {
              while( my $in = <$client> ) {
                  chomp $in;
                  ## printf "\rServer echoing client input: '%s'", $in;
                  print $client $in;
              }
              print "Server shutting down";
              shutdown $client, 2;
              close $client;
          }->detach;
      }

      And here is a simple client:

      #! perl -slw
      use strict;
      use Time::HiRes qw[ time ];
      use IO::Socket;

      our $R //= 1;

      $\ = $/ = chr(13).chr(10);

      my $svr = IO::Socket::INET->new( "localhost:12345" )
          or die "Client: First client connect failed $^E";
      print "Client connected";

      my $last = time;
      my $exchanges = 0;

      while( 1 ) {
          print {$svr} "Hello" x $R or die "$! / $^E";
          chomp( my $in = <$svr> );
          ++$exchanges;
          if( int(time()) > int($last) ) {
              printf "Rate: %.f exchanges/sec)\n",
                  $exchanges / ( time() - $last );
              $last = time;
              $exchanges = 0;
          }
      }

      And runs showing typical exchanges rates for 1, 3, and 10 concurrent clients:

      [10:18:14.83] C:\test>for /L %i in ( 1,1,1) do @start /B t-sockCC -R=1

      [10:22:34.94] C:\test>Client connected
      Rate: 21485 exchanges/sec)
      Rate: 22023 exchanges/sec)
      Rate: 21923 exchanges/sec)
      ...

      3 clients:

      [10:22:38.82] C:\test>for /L %i in ( 1,1,3) do @start /B t-sockCC -R=1

      [10:22:43.80] C:\test>Client connected
      Client connected
      Client connected
      Rate: 16958 exchanges/sec)
      Rate: 16606 exchanges/sec)
      Rate: 17124 exchanges/sec)
      Rate: 17962 exchanges/sec)
      Rate: 17832 exchanges/sec)
      Rate: 17982 exchanges/sec)
      Rate: 18066 exchanges/sec)
      Rate: 17963 exchanges/sec)
      Rate: 18069 exchanges/sec)
      Rate: 17956 exchanges/sec)
      Rate: 17953 exchanges/sec)
      Rate: 17917 exchanges/sec)
      ...

      And 10 clients:

      [10:22:47.85] C:\test>for /L %i in ( 1,1,10) do @start /B t-sockCC -R=1

      [10:22:54.26] C:\test>Client connected
      Client connected
      Client connected
      Client connected
      Client connected
      Client connected
      Client connected
      Client connected
      Client connected
      Client connected
      Rate: 5931 exchanges/sec)
      Rate: 5852 exchanges/sec)
      Rate: 6870 exchanges/sec)
      Rate: 5446 exchanges/sec)
      Rate: 6356 exchanges/sec)
      Rate: 6394 exchanges/sec)
      Rate: 6207 exchanges/sec)
      Rate: 6286 exchanges/sec)
      Rate: 6730 exchanges/sec)
      Rate: 5699 exchanges/sec)
      Rate: 6337 exchanges/sec)
      Rate: 6273 exchanges/sec)
      Rate: 6373 exchanges/sec)
      Rate: 6134 exchanges/sec)
      Rate: 6138 exchanges/sec)
      Rate: 6139 exchanges/sec)
      Rate: 6103 exchanges/sec)
      Rate: 6052 exchanges/sec)
      Rate: 6095 exchanges/sec)
      Rate: 6289 exchanges/sec)
      Rate: 6236 exchanges/sec)
      Rate: 6482 exchanges/sec)
      Rate: 6468 exchanges/sec)
      Rate: 6274 exchanges/sec)
      Rate: 6461 exchanges/sec)
      Rate: 6413 exchanges/sec)
      Rate: 6241 exchanges/sec)
      Rate: 6207 exchanges/sec)
      Rate: 6172 exchanges/sec)
      Rate: 6241 exchanges/sec)
      ...


        BrowserUk,

        W O W !

        Let me look at this for a while. Looks good!

        Thank you

        "Well done is better than well said." - Benjamin Franklin
