Re: Unusual(?) server
by pg (Canon) on Nov 18, 2002 at 06:47 UTC
The third approach I can think of is to use UDP instead of TCP. The down side is that UDP does not guarantee that a packet will be successfully delivered; the up side is that you don't need to establish connections from time to time, or statically maintain them. The decision really depends on whether you can tolerate the loss of some packets, and how good your network conditions are.
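That trade-off takes very little code to demonstrate. Here's a minimal sketch with IO::Socket::INET (the socket names and the `STATUS 1` payload are mine, purely for illustration); note that on a real network the datagram could simply never arrive, which is exactly the risk described above:

```perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

# The "master" listens on a UDP port; the OS picks a free one here.
my $master = IO::Socket::INET->new(
    Proto     => 'udp',
    LocalAddr => '127.0.0.1',
    LocalPort => 0,
) or die "master socket: $!";

# A "slave" just fires a datagram at it -- no connection setup or teardown.
my $slave = IO::Socket::INET->new(
    Proto    => 'udp',
    PeerAddr => '127.0.0.1',
    PeerPort => $master->sockport,
) or die "slave socket: $!";

$slave->send('STATUS 1') or die "send: $!";

# Wait briefly; on a lossy network this is where a packet could vanish.
IO::Select->new($master)->can_read(2) or die "datagram never arrived";
$master->recv(my $datagram, 1024);
```

There is no accept loop and no per-client file descriptor on the master's side: one socket serves all 100 senders.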
| [reply] |
Re: Unusual(?) server
by sauoq (Abbot) on Nov 18, 2002 at 05:19 UTC
There is nothing all that unusual about this except for your backward terminology. You are simply talking about running many servers and using a single client to talk to all of them.
Either of your two ways may be best depending on your needs. Feel free to tell us more.
-sauoq
"My two cents aren't worth a dime.";
| [reply] |
Re: Unusual(?) server
by djantzen (Priest) on Nov 18, 2002 at 05:36 UTC
As a general principle a well-behaved application will release its resources as soon as it can, which favors your first option. So assuming that the period in "periodically" is greater than a couple of minutes, almost certainly that is the better approach. At the same time this means that your clients will effectively be servers, with long-running processes waiting to service requests from your querying server (which is really a client). So, you'll need them to start when the client (that is, server) machine boots.
Now, I take it that the period of time between queries is fairly dynamic. If, however, it operates in specifiable windows, or if the very next connection time is known, then you could have a client (using your original terminology) connect based on a schedule. For instance, at client boot the server is contacted, at which point the client is instructed as to when next to connect. This really depends on the situation, though, so I can't say much more.
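That "tell the client when next to connect" exchange could look something like this sketch. The one-line wire format (`NEXT <seconds>`) is invented here for illustration, not anything from the thread; both ends run in one process over loopback just to show the shape of the exchange:

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Server side: a plain TCP listener on an OS-chosen port.
my $server = IO::Socket::INET->new(
    Proto     => 'tcp',
    LocalAddr => '127.0.0.1',
    LocalPort => 0,
    Listen    => 5,
) or die "listen: $!";

# Client side: connect as usual.
my $client = IO::Socket::INET->new(
    Proto    => 'tcp',
    PeerAddr => '127.0.0.1',
    PeerPort => $server->sockport,
) or die "connect: $!";

my $conn = $server->accept or die "accept: $!";
print {$conn} "NEXT 60\n";     # tell this client to come back in a minute
close $conn;

chomp(my $reply = <$client>);
my ($delay) = $reply =~ /^NEXT (\d+)$/;
# a real client would now do: sleep $delay; and then reconnect
```

The server stays in control of the polling schedule without having to initiate any connections itself.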
| [reply] |
Re: Unusual(?) server
by Aristotle (Chancellor) on Nov 18, 2002 at 08:41 UTC
Let's see. You have 100 slaves and one master. The master needs to collect data from each slave once a minute. Does the master always collect a piece of data per minute per slave? If so, it is not obvious why you need the master to initiate the connections; you could implement this as a regular server that waits for incoming connections. It just needs to be aware of how many clients it has, total, and keep track of which ones have fallen out of schedule.
100 TCP connections a minute sounds quite moderate; any intranet webserver has to manage more.
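"Keep track of which ones have fallen out of schedule" can be as simple as a hash of last-seen timestamps. A sketch (the sub and host names are mine, for illustration only):

```perl
use strict;
use warnings;

# last report time per slave, keyed by hostname
my %last_seen;

# record that a slave checked in at a given epoch time
sub note_report {
    my ($host, $time) = @_;
    $last_seen{$host} = $time;
}

# return every slave not heard from within $window seconds of $now
sub stale_hosts {
    my ($now, $window) = @_;
    return grep { $now - $last_seen{$_} > $window } sort keys %last_seen;
}
```

The server calls note_report() on every incoming connection and runs stale_hosts() once a minute to find slaves that missed their slot.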
May I ask what it is you're trying to do? If it's a monitoring script, you needn't reinvent the wheel; there are very capable packages like Nagios (formerly NetSaint), mon and Big Brother freely available for the job.
Makeshifts last the longest.
| [reply] |
All of the monitoring tools I mentioned have the ability to react to defined conditions by executing configured commands. You could, for example, launch a script that invokes a command on the slave using SSH.
mon might be interesting as it's written in Perl itself.
The other option might be to write just the part that sends commands to clients but use a monitoring package to do the regular tasks. As the control script runs on the same machine as the monitor, it could peek at the monitor's data and occasionally initiate actions on its own.
Makeshifts last the longest.
| [reply] |
Re: Unusual(?) server
by Nygeve (Acolyte) on Nov 18, 2002 at 06:41 UTC
Ok. You (both) are right about my backward terminology.
Well, what actually bothers me:
Using either model I've described, I have to use child processes. In the first case I have to spawn and reap a number of processes every minute, creating and destroying a number of sockets. In the second case I have to keep a number of processes running constantly and a number of sockets open.
Which case is more reliable, consumes fewer resources, etc.? | [reply] |
Forgot to insert this paragraph :)
I think I should say what this stuff actually has to do, and what for. Clients are running on a number of machines (about 100); all they have to do is know their current state (0 or 1). This information is controlled, calculated, and so on, by the server. "Periodically" means every 1 minute. Looking at this model, you see a usual client-server application. What's unusual is that there are situations when the server must be able to send a new state to a client instantly, without waiting for the next connection round.
| [reply] |
Given your needs, I think that pg hit the nail on the head in his post when he suggested UDP.
To avoid the terminology problem, let's call the single machine the collector and the 100 machines emitters.
Have the collector listen on a single UDP socket for packets from the emitters. Each time it gets a packet, it updates that emitter's status in a hash with a timestamp. Inside the listen loop, have the collector run through the list looking for any emitter with a timestamp more than a minute old. If it finds one, have it attempt a TCP connect to that emitter and request the status; if the connect fails or the status isn't received, raise the alarm.
If you have the emitters send their status at, say, twice the frequency of the timeout period, then you allow for some packets to get dropped without raising false alarms.
With this model, there is no need for child processes, reaping, or hundreds of connections and ports: just one listening UDP port and a low-frequency, transient TCP connection that is only used when the network gets flaky. You might benefit from making that a child process, though if you are running on a threaded perl, it would make more sense and use fewer resources to use a thread.
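One pass of that collector loop can be sketched as follows; nothing here beyond the idea comes from the post, and the "addr:port" keying is my own choice. It drains any waiting datagrams, stamps each sender, then reports who has gone quiet:

```perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;
use Socket;   # for sockaddr_in and inet_ntoa

# One pass of the collector: read every waiting datagram, record a
# timestamp per emitter (keyed "addr:port"), then return the stale ones.
sub collect_once {
    my ($udp, $seen, $window) = @_;
    my $sel = IO::Select->new($udp);
    while ($sel->can_read(0.1)) {              # short wait, then fall through
        my $peer = $udp->recv(my $msg, 1024) or last;
        my ($port, $addr) = sockaddr_in($peer);
        $seen->{ inet_ntoa($addr) . ":$port" } = time;
    }
    my $now = time;
    return grep { $now - $seen->{$_} > $window } sort keys %$seen;
}
```

For any emitter this pass reports as stale, the follow-up would be a short-lived `IO::Socket::INET->new(Proto => 'tcp', PeerAddr => $emitter, Timeout => 5)`; if that connect or the status read fails too, raise the alarm.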
On a threaded perl, you could also have the timestamp-verifying loop run in a separate thread with read-only access to the hash, whilst the listening loop did the updating, without needing much in the way of serialisation.
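That split could be sketched with threads and threads::shared (this assumes a perl built with ithreads, and the variable names are mine): the listener thread writes the shared hash, and the checking side only reads it.

```perl
use strict;
use warnings;
use threads;
use threads::shared;

# emitter => timestamp; written by the listener, read by the checker
my %last_seen :shared;

# Listener thread: in real use this would sit in the UDP recv loop.
my $listener = threads->create(sub {
    # stand-in for "got a datagram from node-a"
    lock %last_seen;
    $last_seen{'node-a'} = time;
});
$listener->join;

# Checker side: a read-only scan for stale emitters.
my $now   = time;
my @stale = grep { $now - $last_seen{$_} > 60 } sort keys %last_seen;
```

The lock is only needed around writes to keep the update atomic; the checker's scan tolerates slightly stale reads by design.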
Okay you lot, get your wings on the left, halos on the right. It's one size fits all, and "No!", you can't have a different color.
Pick up your cloud down the end, and "Yes", if you get allocated a grey one they are a bit damp underfoot, but someone has to get them.
Get used to the wings fast 'cos it's an 8-hour day... unless the Governor calls for a cyclone or hurricane, in which case 16-hour shifts are mandatory.
Just be grateful that you arrived just as the tornado season finished. Them buggers are real work.
| [reply] |