in reply to Rapid inter-machine communication on internal network

Neat abstraction/description of your challenge! Based on what you have described, I would wrap my initial (how many bananas) queries individually and use a dispatch routine to keep track of when all the answers come back, maintaining an open session to each of my back-end servers. Then do the same with my second query on the still-open sessions. Once the answers are received, close the sessions, unless you're asking the questions every few seconds...
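To make the dispatch idea concrete, here's a minimal sketch. It fakes the back-end servers with forked children over socketpair so the script is self-contained; in real life `@sessions` would be persistent TCP connections to your banana servers, and the collect loop would be the same:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Select;
use IO::Handle;
use Socket;

# Simulate three back-end servers with forked children; each "server"
# holds a fixed number of bananas. (Stand-ins for real TCP sessions.)
my @sessions;
for my $count (5, 3, 8) {
    socketpair(my $parent, my $child, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
        or die "socketpair: $!";
    my $pid = fork() // die "fork: $!";
    if ($pid == 0) {    # child: answer the "how many bananas" query
        close $parent;
        $child->autoflush(1);
        my $query = <$child>;
        print {$child} "$count\n";
        close $child;
        exit 0;
    }
    close $child;
    $parent->autoflush(1);
    push @sessions, $parent;
}

# Dispatch the query on every open session...
print {$_} "how many bananas?\n" for @sessions;

# ...then collect replies as they arrive, tracking who has answered.
my $sel   = IO::Select->new(@sessions);
my $total = 0;
while ($sel->count) {
    my @ready = $sel->can_read(10) or last;   # 10s worst-case timeout
    for my $s (@ready) {
        chomp(my $answer = <$s>);
        $total += $answer;
        $sel->remove($s);
        close $s;
    }
}
wait() for @sessions;    # reap the fake servers
print "total bananas: $total\n";    # prints "total bananas: 16"
```

A second round of queries would work the same way on the same handles, before you close them.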

There are a lot of open questions here though:

  1. why the need for speed? Because there are many such queries? Or because the number of bananas on each machine or their price changes rapidly?
  2. how important is the time between answers to the query from machine to machine?
  3. is a held-open session detrimental?

Hope I didn't completely miss the point.

...the majority is always wrong, and always the last to know about it...


Re^2: Rapid inter-machine communication on internal network
by Anonymous Monk on Oct 30, 2006 at 15:52 UTC
    why the need for speed? Because there are many such queries?

    Split-second response time is of great value to the end user in this case. They really, really need to know the price of bananas right now. And yes, there are a lot of queries. Everybody loves bananas!

    how important is the time between answers to the query from machine to machine?

    For some commodities, it's easy to calculate a price and the worker nodes will finish their calculations quickly. For others it's tougher, and the worker nodes will have to churn for a while.

    When the worker nodes can finish their tasks quickly, it's important that the inter-machine communication time doesn't become a bottleneck that degrades the apparent response time from the user's perspective.

    Also note that because we need to know the total number of bananas across all nodes before any node can start calculating a price, we have a situation where the worker nodes as a group are only as fast as their slowest member. The same situation comes into play when calculating the final price to serve to the user -- we have to know what all the nodes want to charge before we can determine the final price. This is especially important if only one node has bananas.

    So we need a strategy that has good worst-case-scenario performance.

    is a held-open session detrimental?

    I don't know. I've never done any sockets programming.

    Probably it's better. When would it be bad?

      Ok, so you have a large number of repetitive queries that need to produce fast results on demand. Interesting that you stated "apparent" response time. That would imply that some fudging could take place... but I won't go there.

      My experience with this is in using Perl's DBI with MySQL. Because the answer you are passing back from the DB servers is basically a single value, bandwidth should not be a problem. Holding the connection open (persistent) reduces the time lag a great deal. The only time you would not do that is if you are pushing the limit on the number of allowed connections to your DB server(s).

      Since you have busy DB servers, I would hold the persistent connections for the shortest time possible: close each connection as soon as you get the second answer back from that server, so that you are not hogging connections.
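      A rough DBI sketch of that pattern: connect once, run both queries on the still-open handles, and disconnect the moment the second answer is back. This uses in-memory DBD::SQLite databases as stand-ins (assumed installed) so it runs anywhere; with MySQL you'd connect to `DBI:mysql:database=...;host=...` instead, and the pricing query here is purely hypothetical:

```perl
use strict;
use warnings;
use DBI;

# Two in-memory SQLite databases standing in for busy MySQL servers.
my @dbh;
for my $n (1 .. 2) {
    my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                           { RaiseError => 1, AutoCommit => 1 });
    $dbh->do("CREATE TABLE stock (bananas INTEGER)");
    $dbh->do("INSERT INTO stock VALUES (?)", undef, 4 * $n);
    push @dbh, $dbh;
}

# First query on every open handle: how many bananas?
my $total = 0;
$total += $_->selectrow_array("SELECT bananas FROM stock") for @dbh;

# Second query on the same (still-open) handles, then close promptly
# so the busy servers get their connections back.
my @prices;
for my $dbh (@dbh) {
    # hypothetical pricing query; yours will differ
    my ($price) = $dbh->selectrow_array("SELECT ? * 0.25", undef, $total);
    push @prices, $price;
    $dbh->disconnect;    # release the connection as soon as we're done
}
print "total=$total prices=@prices\n";
```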

      Perceived time delay is going to be dependent on the database answer, not on the network connection (IMO). You might also keep your fast query answers on the "answer" server (the one that calculates the averages) for as long as there is an open connection to your slowest server, not allowing any further queries to the fast servers until the slowest one responds; then do the calculations and close that last connection. That way you don't end up with a bunch of meaningless queries clogging up the network.
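      The "answer" server's bookkeeping for that could be as simple as parking each fast reply until the slowest node has checked in, then doing the calculation once. A sketch (server names and the averaging are made up for illustration):

```perl
use strict;
use warnings;
use List::Util qw(sum);

my @servers = qw(fast1 fast2 slow);
my %price_from;    # server name => quoted price, parked until all are in

# Record one server's answer; returns the average only once the
# slowest server has finally reported, undef while still waiting.
sub record_answer {
    my ($server, $price) = @_;
    $price_from{$server} = $price;
    return undef if keys %price_from < @servers;  # still waiting on someone
    return sum(values %price_from) / keys %price_from;
}

record_answer(fast1 => 1.10);              # parked
record_answer(fast2 => 1.30);              # parked
my $avg = record_answer(slow => 1.20);     # all in: safe to close sessions
print "average price: $avg\n";
```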

      Hope this makes sense :-)

      ...the majority is always wrong, and always the last to know about it...