I suppose it's possible that for this application to scale to really big levels, a fundamentally different approach will be needed, rather than just making what you have now faster. I don't know.
A brief description of what you have working now, the benchmarking you've done, and the problems that appear as the application scales would be useful for "thinking outside the box".
You say that this huge hash lookup feeds some web application. Normally a web app doesn't require nanosecond response times. What usually matters is a high number of transactions per minute with an "acceptable" response time for each user request. A few tens of milliseconds typically won't matter at all: a human eye blink takes around 300 ms, and our hearing can detect maybe a 50 ms difference between two voice prompts. A time budget like that leaves room for a few milliseconds spent on IPC.
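To put an actual number on that IPC cost, something along these lines could be timed on the real box. This is only a sketch: it assumes a hypothetical lookup daemon on localhost port 5000 that answers one line per request; the port, the protocol and the key names are made up for illustration.

    use strict;
    use warnings;
    use IO::Socket::INET;
    use Time::HiRes qw(gettimeofday tv_interval);

    # Connect to the (hypothetical) lookup daemon.
    my $sock = IO::Socket::INET->new(
        PeerAddr => '127.0.0.1',
        PeerPort => 5000,
        Proto    => 'tcp',
    ) or die "connect: $!";
    $sock->autoflush(1);

    my $n  = 10_000;
    my $t0 = [gettimeofday];
    for my $i (1 .. $n) {
        print {$sock} "lookup key$i\n";   # one request per line
        my $reply = <$sock>;              # one reply per line
    }
    printf "average round trip: %.3f ms\n", tv_interval($t0) / $n * 1000;

If the average round trip comes out in the small fractions of a millisecond range, IPC overhead is probably not what your users will notice.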
I am wondering whether a distributed DB spread over a pile of smaller machines, rather than this single 200 GB super monster, could be considered. That could provide further scalability and redundancy. Maybe the DB lookup also needs to be more than one hash key at a time, i.e. get everything you need for the page in one transaction (see the sketch below).
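As a sketch of that "all in one transaction" idea, assuming the data were moved into something like Redis (the server address and key names here are just placeholders), one round trip could fetch every key a page needs via the Redis CPAN module's mget:

    use strict;
    use warnings;
    use Redis;

    my $redis = Redis->new( server => '127.0.0.1:6379' );

    # Fetch everything one page needs in a single round trip instead of
    # one network call per hash key.
    my @keys   = qw(user:42:name user:42:prefs page:home:layout);
    my @values = $redis->mget(@keys);

    my %page_data;
    @page_data{@keys} = @values;

The same batching idea applies to most distributed key/value stores, not just Redis.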
Sorry that I don't have a simple answer that says "x". I am curious, since your questions over the years seem to share a common theme that is hard to solve easily.