in reply to Re: CB stats, Last hour of CB
in thread CB stats, Last hour of CB
There are a number of considerations here. First, to your point of debugging, yes, multithreading is generally much harder to debug. I had a conversation with a monk many years ago on this topic where he was advocating perl threads for something, using a shared variable to send back all the information. I smelled some male-bovine manure, so I tested it out and showed that it was, in fact, problematic. He doubled down, and eventually I gave up, having proven that multi-threading is harder than it appears. As I recall, he's no longer a member, but then again, mostly neither am I :)
So this game will be a vue-based web front end with a REST API backend, which is similar to the original (although vue will be an upgrade from what it was as well). In the original perl backend, nginx would proxy each request to one of two servers, both running the same code (horizontal scaling!), each with a process pool to pick up the request and run with it. But each process would make calls to the db, both for reading and writing, as well as to memcache, etc., which meant the process would sit idle for large stretches of time. This isn't hugely horrible; the kernel will see it's sleeping and move on. But processes are significantly more overhead than threads for the kernel to track and manage (though less so on linux than windows), and both are significantly more overhead than coroutines. So a single thread can send multiple requests to various sources (postgres, redis, etc.) and will only need actual threads for computation, of which there likely won't be much most of the time (the main resource-processing loop might be an exception, but even that likely isn't much). Handling dozens of simultaneous requests on a single thread should be possible with proper coroutine support.
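The backend itself will be a different stack, but the single-thread coroutine model is easiest to sketch in Python's asyncio. The two query functions below are hypothetical stand-ins that simulate a Postgres and a Redis round trip with sleeps; the point is that dozens of "requests" overlap on one OS thread, each coroutine yielding while it waits on I/O:

```python
import asyncio
import time

# Hypothetical stand-ins for real I/O: each await yields the single
# thread back to the event loop instead of blocking a whole process.
async def query_db(request_id: int) -> str:
    await asyncio.sleep(0.1)   # simulate a Postgres round trip
    return f"db-row-{request_id}"

async def query_cache(request_id: int) -> str:
    await asyncio.sleep(0.05)  # simulate a Redis/memcache round trip
    return f"cached-{request_id}"

async def handle_request(request_id: int) -> str:
    # Fire both I/O calls concurrently; while the sockets are idle,
    # this coroutine costs essentially nothing.
    row, cached = await asyncio.gather(
        query_db(request_id), query_cache(request_id)
    )
    return f"{row}/{cached}"

async def main() -> list[str]:
    # Two dozen simultaneous requests, one OS thread.
    return await asyncio.gather(*(handle_request(i) for i in range(24)))

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start
```

Run sequentially, those 24 requests would take roughly 24 × 0.15 s; run as coroutines they finish in about one round-trip time, which is the whole argument for this model on I/O-bound workloads.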
There is also the idea of adding websockets into the mix, and that, too, should be doable with few, if any, additional threads, as most of the time those coroutines will be dormant.
As to checkpointing, that's a bit further than I usually go with coroutines, but I might have to go there with websockets to shut them down cleanly. Even then, there is a way (more than one, really) to tell the server to save all its state and exit; in C#, that way is a CancellationToken. Once the cancellation is received, you do whatever you need to cancel things, which could include saving state, though usually it's simply throwing an exception to back out of the stack. Mostly this isn't an issue because everything happens in the database, which is a requirement for horizontal scaling anyway.
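C#'s CancellationToken has a rough analogue in asyncio's task cancellation, which keeps the examples here in one language. A minimal sketch, where the checkpoint dict is a hypothetical stand-in for whatever state actually needs saving:

```python
import asyncio

# Hypothetical stand-in for state persisted on shutdown.
saved_state = {}

async def worker():
    try:
        while True:
            # Dormant most of the time, e.g. a websocket reader.
            await asyncio.sleep(3600)
    except asyncio.CancelledError:
        # Cancellation arrives as an exception at the await point:
        # save what you must, then re-raise to unwind the stack.
        saved_state["checkpoint"] = "persisted"
        raise

async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0)   # let the worker reach its first await
    task.cancel()            # the "cancellation token" fires
    try:
        await task
    except asyncio.CancelledError:
        pass
    return saved_state

state = asyncio.run(main())
```

The shape mirrors the C# pattern: the shutdown signal propagates as an exception through every suspended coroutine, and each one gets a chance to checkpoint on the way out.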
My theory is that the workload that was being done via perl on two VM servers could be handled trivially on a single, smaller VM with coroutines. My day job involves a broadly similar scenario, with insufficiently optimised computation in many places and likely an order of magnitude more simultaneous users than I could dream of for my project, which leads me to conclude that there was something wrong with the original setup; I believe (though I could be wrong) that the heavyweight process-per-request model is the cause.
The new model will likely also involve nginx as a proxy to a single backend server (so we can eventually scale horizontally, though I highly doubt it'll ever be necessary), with nginx also serving the vue code and static assets directly. Both of these will now live on the same server. Redis and a scheduler/discord bot will also live on that server (though not listening on any public ports). Maintaining security while scaling horizontally will be a bit of a challenge, but should be fine. Postgres will live on a second server. This is compared to the original system using 5 servers.
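That layout could be sketched as a single nginx server block; the hostname, paths, and port below are all hypothetical:

```nginx
server {
    listen 443 ssl;
    server_name game.example.com;   # hypothetical hostname

    # Serve the built vue app and static assets directly from disk.
    root /var/www/game/dist;

    # Proxy the REST API to the single backend process; adding an
    # upstream block later is all horizontal scaling would require.
    location /api/ {
        proxy_pass http://127.0.0.1:5000;
    }

    # Websockets need the HTTP/1.1 Upgrade headers passed through.
    location /ws/ {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Redis, the scheduler, and the discord bot would bind only to localhost on this box, with Postgres reachable over the private network on the second server.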