It is with some regret that I've come to realise that a return to Perl is unlikely for me, certainly not any time soon. Since I haven't really been an active participant in this community in years, I think it's prudent to accept the truth and pass on the torch.

By the end of the year, I will be shutting down the CB stats service I've been running for probably over a decade now. That service also provides last hour of cb, so that will disappear along with it.

While I'm more than willing to pass on the codebase to anyone who asks for it (my nick at gmail.com), please be forewarned: it's not the cleanest code, as I used it to learn both Coro and DB2 (I was working for the latter's vendor at the time). You'll probably want to re-seat it against a different database, and the queries will be significantly less performant there (sorry, but joins are something DB2 objectively does better than MySQL). I've gone on to do a lot of work against various databases, and what I learned writing this helped immensely, so I have zero regrets about doing it.


Why am I no longer doing Perl? Well, it's simple. After working from home for 13 years, I was told to return to Toronto. That wasn't going to happen, so I had to find another job. It took me 18 months, during which time I managed to hold on to the work-from-home position. What I found was a job doing C# on Windows. That job was, let's just say, sub-optimal, and a bit over a year later I found another local job, still in C#, leading a small team. I've been there for over four years at this point. By January, it'll have been about five years since I did any significant amount of Perl.

While working from home, I also helped out with an online game written in Perl, which got shut down shortly before I left that position. I've come to realise that part of its running cost came down to Perl - specifically, its lack of threading and co-routine support. (Yes, Coro works, but it's a serious hack, to the point where I'm not confident in its future, nor in its ability to support all the drivers and such that this would require.) I've wanted to re-start this game, and only recently embarked on doing so. I decided a better backend for it would be .NET, whose native support for threads and co-routines (async/await) is significantly stronger - but also because I'm already using it in my day job, so it's advantageous to stay ahead of the team that reports to me on technical matters. This drops my chances of returning to Perl in any significant manner even further.


I cannot guarantee a timely return to view any responses here. You're better off with email if you feel the need to reach me. While I, personally, don't see any harm in multiple monks having their own CB scrapers and CB stats generators, you will definitely need the gods' approval to take over the last hour of cb, as they will have to assign ownership to someone.

Replies are listed 'Best First'.
Re: CB stats, Last hour of CB
by cavac (Prior) on Nov 19, 2023 at 09:38 UTC

    After talking to Tanktalus and the gods, a new implementation of last hour of cb is in the works. This will, in the future, also provide more features. CB Stats will also live on, although it is a completely new implementation and the stats calculation might be a little different.

    I will post a separate Meditation with more details when it is all working to my satisfaction, but here is the preview on "last hours" on my website:
    https://cav.ac/guest/chatterbox/lasthours

    The new CB stats isn't finished yet (and has many bugs), but you can see what i currently have at:
    https://cav.ac/guest/chatterbox/statistics

    One of the new features in the works is snapshots. You will be able to just type "!snapshot" in chatterbox (or your favourite DIY chat client), and chatterbot will make you a static copy of "last hours" and send you a private message with a link to it. That way, you can more easily preserve useful discussions and links for future use. If you have an XPD account (which is linked to your PM account), you will also have an archive of all your snapshots.

    As i said, more details in a future Meditation, this was just a quick post to update you that "Yes, last hour of CB and CB Stats have a future".

    PerlMonks XP is useless? Not anymore: XPD - Do more with your PerlMonks XP

      ++

      Thank you, cavac, for taking this on so that we can have a clean (more or less) transition. When I took these over, the previous iterations had been dead for some time, so the transition had a long period of not-working-at-all. This is a better plan :)

      I'm still transitioning stuff from my old PC to my new PC; this service, though, will not move at all. But neither will the old machine be decommissioned, though it will bounce a few times yet as I physically move it around. So there's time, though the service will be a bit flakier than normal for a while :)

        It all starts to come together. Ye old chatterbot now has a picture on his profile.

        The upload code mostly works, except for a couple of missing timezone conversions and missing information lines. chatterbot dev page

        If everything runs stably for a couple of days, i'll tell my live system (i have two installations of that stuff) to start trying to upload to the official "last hours" page; it will get "permission denied" errors at first. Then i'll ask the gods to hand over the official "last hour" page. Basically, what will happen is that you keep running your upload code until you start getting "permission denied" errors, while my code suddenly gets the permission to upload its updates. It should be a completely smooth transition from the user's point of view.

        (Yes, i know, that kind of "treat it as mission critical code" isn't strictly required. But hey, i don't get many opportunities to plan and execute a controlled takeover/hot swap of an existing system. Treating it as "mission critical" gives me a nice chance to check my strategies and learn from my mistakes.)

        The new "CB Stats" page is a bit more complicated, but it's also not as important as the chatlog itself. If it doesn't provide all the features right away, that's not a big deal.

        PerlMonks XP is useless? Not anymore: XPD - Do more with your PerlMonks XP
Re: CB stats, Last hour of CB
by NERDVANA (Priest) on Nov 03, 2023 at 09:16 UTC
    I've wanted to re-start this game, but only recently embarked on this, and decided a better backend for this game would be .NET whose native support for threads and co-routines (async/await) is significantly stronger,

    I don't use the CB that much, so I don't have a lot to say there, but I'm very interested in the topic of threading :-)

    What sort of workload does this game run that would benefit from C# style threading? I ask because I also have an interest in online game servers, and from my experiences with Java, decided that I wouldn't want to do it that way; locking access to the different data structures can get very complex, adds a lot of cognitive overhead, and is generally hard to debug. Meanwhile, if the threads are acting on behalf of stateful user connections, they don't scale well. If you need threads because there are too many user connections for one thread to handle, I would rather have multiple worker processes that load-balance the connections and exchange their data through a shared state in a database.

    But, maybe threading was a secondary goal and your main desire is the async/await coroutines? If your game (like the one I was tinkering with) is a MUD, you might find in the end that async/await isn't that great of a fit. Async/Await makes it easy to write scripted events as a natural tree of function calls, each of which can divert to wait for other events to complete, but this structure doesn't "checkpoint" well. There is no way to tell the server to save all that state and exit so that you can upgrade the code and pick up where it left off. If you instead design the scripted workload as a list of state machines, you can save that off to a table and then pick it up in a new process (maybe after patching some bugs or something) and the state machines can carry on right where they left off. You can even save off the state machines if all the players exit the area where it would have visible effects, and load them back up when players re-enter the area. It also lets you pass the workload for an area between threads/processes so that events occurring in a local area can keep the state local and not need as many database accesses.
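    The checkpointing idea above can be sketched in miniature. This is an illustrative Python stand-in (the class and field names are invented for this example, not from any real MUD codebase): the scripted event keeps all of its state in a plain dict, so it can be serialized to a database row, the process can exit or be upgraded, and a new process can load the blob and carry on exactly where the machine left off.

```python
import json

class DoorTrapEvent:
    """A scripted event written as an explicit state machine.

    All state lives in a plain dict, so it can be saved to a database
    row and resumed in a new process after a code upgrade, unlike an
    async/await coroutine whose state is trapped on the call stack."""

    def __init__(self, state=None):
        self.state = state or {"phase": "armed", "ticks": 0}

    def step(self, world):
        """Advance the machine one tick based on observed world state."""
        phase = self.state["phase"]
        if phase == "armed":
            if world.get("door_opened"):
                self.state["phase"] = "triggered"
        elif phase == "triggered":
            self.state["ticks"] += 1
            if self.state["ticks"] >= 3:
                self.state["phase"] = "done"
        return self.state["phase"]

    def save(self):
        # Checkpoint: everything needed to resume fits in one JSON blob.
        return json.dumps(self.state)

    @classmethod
    def load(cls, blob):
        return cls(json.loads(blob))

# Run a few ticks, checkpoint, then resume as if in a freshly started process.
ev = DoorTrapEvent()
ev.step({"door_opened": False})      # still "armed"
ev.step({"door_opened": True})       # now "triggered"
blob = ev.save()                     # write to a table, shut down
ev2 = DoorTrapEvent.load(blob)       # new process picks it up
ev2.step({})                         # carries on counting ticks
```

    The same trick also covers the "unload when players leave the area" case: the saved blob just sits in its table until something reloads it.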

    If your game server was for a 3D shooter that needs to validate lots of collision checks on a shared 3D collection of polygons, then I totally see the C# advantage. ...but mainly for the shared data structures and speed. I would still choose to write that kind of a server using state machines :-)

      There are a number of considerations here. First, to your point of debugging, yes, multithreading is generally much harder to debug. I had a conversation with a monk many years ago on this topic where he was advocating perl threads for something, using a shared variable to send back all the information. I smelled some male-bovine manure, so I tested it out, and showed that it was, in fact, problematic. He doubled down, and eventually I gave up, having proven that multi-threading is harder than it appears. As I recall, he's no longer a member, but then again, mostly neither am I :)

      So this game will be a vue-based web front end with a REST API backend, which is similar to the original (although vue will be an upgrade from what it was as well). In the original perl backend, nginx would proxy each request to one of two servers, both running the same code (horizontal scaling!), each with a process pool to pick up the request and run with it. But each process would make calls to the db, both for reading and writing, as well as to memcache, etc., which meant the process would sit there idle for large stretches of time. This isn't hugely horrible - the kernel sees it's sleeping and moves on - but processes are significantly more overhead than threads for the kernel to track and manage (though less so on linux than windows), and both are significantly more overhead than coroutines. With proper coroutine support, a single thread can have multiple requests in flight to various sources (postgres, redis, etc.) and only needs actual threads for computation, of which there likely won't be much most of the time (the main resource-processing loop might be an exception, but even that likely isn't much). Handling dozens of simultaneous requests on a single thread should be possible.
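      The single-thread concurrency argument can be demonstrated in a few lines. This is Python asyncio rather than C#, but the async/await model is the same; the `fetch_db`/`fetch_cache` functions are invented stand-ins for postgres and redis round trips:

```python
import asyncio
import time

async def fetch_db(req_id):
    await asyncio.sleep(0.1)    # stand-in for a postgres round trip
    return f"db:{req_id}"

async def fetch_cache(req_id):
    await asyncio.sleep(0.1)    # stand-in for a redis/memcache lookup
    return f"cache:{req_id}"

async def handle_request(req_id):
    # Both I/O calls run concurrently; the thread is free while waiting.
    db, cache = await asyncio.gather(fetch_db(req_id), fetch_cache(req_id))
    return (db, cache)

async def main():
    start = time.monotonic()
    # Fifty simultaneous requests on one OS thread, no process pool.
    results = await asyncio.gather(*(handle_request(i) for i in range(50)))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
# Total wall time is roughly one round trip, not 50 of them in series.
print(len(results), "requests in", round(elapsed, 2), "seconds")
```

      A pool of blocking worker processes doing the same work would either need 50 workers or take 50 times as long; here the sleeps all overlap on a single thread.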

      There is also the idea of adding websockets into the mix, and that, too, should be doable with few, if any, additional threads, as most of the time those coroutines will be dormant.

      As to checkpointing, that's a bit further than I usually go with coroutines, but I might have to go there with websockets to shut them down cleanly. However, even then, there is a way to tell the server to save all the state and exit - more than one way, really - and in C#, that way is a CancellationToken. Once the cancellation is received, you do what you need to cancel things, which could be saving state, though usually it's simply throwing an exception to back out of the stack. Mostly this isn't an issue because everything happens in the database, which is a requirement for horizontal scaling anyway.
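      In C# the cooperative-shutdown mechanism is CancellationToken; the rough Python asyncio analogue (a hypothetical sketch, not anyone's actual game code) is task cancellation, where the cancellation surfaces as an exception that unwinds the coroutine's stack and gives it a chance to save state on the way out:

```python
import asyncio

async def game_loop(saved):
    """Long-running coroutine that saves its state when cancelled."""
    state = {"tick": 0}
    try:
        while True:
            state["tick"] += 1
            await asyncio.sleep(0.01)   # stand-in for db/websocket work
    except asyncio.CancelledError:
        # Cancellation unwinds the stack here; do any cleanup or saving,
        # analogous to honouring a C# CancellationToken on shutdown.
        saved.update(state)
        raise                            # let the cancellation propagate

async def main():
    saved = {}
    task = asyncio.create_task(game_loop(saved))
    await asyncio.sleep(0.05)            # let it run a few ticks
    task.cancel()                        # request a clean shutdown
    try:
        await task
    except asyncio.CancelledError:
        pass
    return saved

print(asyncio.run(main()))               # the state survived the shutdown
```

      The C# version is structurally the same: the token is polled (or throws OperationCanceledException at await points), and the handler decides whether to persist or just back out.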

      My theory is that the workload that was being done via perl on two VM servers could be handled trivially on a single, smaller VM with coroutines. My day job involves a more or less similar scenario, with insufficiently optimised computation in many places and likely an order of magnitude more simultaneous users than I could dream of for my project. That leads me to conclude that something was wrong with the original setup, and I believe (I could be wrong) that this is the cause.

      The new model will likely also involve nginx as a proxy to a single backend server (so we can eventually scale horizontally, though I highly doubt it'll ever be necessary), with nginx also serving the vue code and static assets directly. Both of these will now live on the same server. Redis and a scheduler/discord bot will also live on that server (though not listening on any public ports). Maintaining security under horizontal scaling will be a bit of a challenge, but should be fine. Postgres will live on a second server. This is compared to the original system's 5 servers.

        Ok, so it's basically the standard web-worker model. If you haven't yet, I would suggest looking at Mojolicious. It has an amazingly convenient system for websockets, and is built around an event loop. When paired with Mojo::Pg, you can implement an entire web worker with non-blocking calls very conveniently. It's not quite as convenient as async/await keywords, but the way it organises the callbacks into events on objects is almost as nice.

        I did a review of all the async solutions for websockets a few years ago in my YAPC talk "The Wide World of Websockets". I wasn't using a database for any of those, but I implemented a simple chat server in each perl library, and Mojo seemed like the clear winner. Meanwhile, the multiplayer Asteroids demo is still live at https://nrdvana.net/asteroids. (It's only half-implemented and a little buggy, but shows the possibilities pretty well. Click 'reconnect' however many times it takes...)