beautyfulman has asked for the wisdom of the Perl Monks concerning the following question:
Re: PSGI/Plack unsatisfactory performance
by Your Mother (Archbishop) on Dec 07, 2021 at 05:35 UTC
I’m one of the ones, probably the one you’re calling out, who has suggested uwsgi repeatedly. Stability is not free, but it’s the only professional answer. Starman was close to useless for me in the *multiple* times I fought to use it; I prefer Perl solutions. I think I even filed a bug report on core dumps related to some bad hardcoded port stuff once and never got an answer. I’ve never heard of, nor tried, the other options, but they also appear unstable. unit from the nginx people might be worth trying; I still haven’t.

“Hello world” tests are essentially useless in real-world terms, unless your app has no templates, no DB/model layers, and no processing of any kind. Then it works, but at that point it’s a static app and every raw webserver will beat it easily. I am skeptical that even with mod_php, or whatever the standard is now, you are seeing better than 80x improved performance. It’s probably some sort of webserver caching, since the page/request never changes.

mod_perl, as suggested already, will be faster, but it’s a mistake, and a dead end likely to get pulled out from under you in the future, in my view.
by beautyfulman (Sexton) on Dec 25, 2021 at 23:22 UTC
by beautyfulman (Sexton) on Dec 26, 2021 at 22:10 UTC
by Your Mother (Archbishop) on Dec 27, 2021 at 00:38 UTC
That’s fantastic. If you have the time and patience, I encourage you to write up your approach in as much detail as possible and post it here. Deployment is possibly the hardest part, outside security, of getting web apps right, and it sounds like you’re hitting on winning combinations.
by beautyfulman (Sexton) on Dec 27, 2021 at 02:03 UTC
by Your Mother (Archbishop) on Dec 28, 2021 at 20:29 UTC
Re: PSGI/Plack unsatisfactory performance
by NERDVANA (Priest) on Dec 07, 2021 at 22:19 UTC
If you get *any* connections dropped, something has gone wrong. You have 100 concurrent requests, so any server with a listen() backlog of at least 100 should serve every request without dropping any. I suspect the "something wrong" is that you ran with the default "max requests", which for Starman is 1000. This means that after 1000 pages served, it will kill the worker and start a new one. While the worker is restarting, perhaps that loses a connection? JMeter looks like an unpleasant pile of Java and GUI with a very long manual, so I'll make some examples with 'ab' instead. My laptop is an Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz, 4 cores / 8 threads.
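For reference, both knobs mentioned here, the listen() backlog and the per-worker max-requests, can be set on Starman's command line. A sketch only: the values and the `app.psgi` file name are illustrative, and flag behavior may vary by Starman version.

```shell
# Raise the listen() backlog above the 100 concurrent clients, and raise
# max-requests so no worker is recycled mid-benchmark (Starman's default
# is 1000 requests per worker before the worker is killed and restarted).
plackup -s Starman --workers 8 --backlog 1024 --max-requests 100000 app.psgi
```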
I'll try Gazelle first:
So, mine is running 4x faster with no dropped requests. On a laptop. Now Starman:
Now Feersum:
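For anyone wanting to reproduce runs like these, the three invocations might look like the following. This is a sketch: the port, worker counts, request counts, and the hello-world `app.psgi` are illustrative, and each server's flags can differ by version.

```shell
# Hello-world PSGI app used for all three runs
echo 'my $app = sub { [200, ["Content-Type" => "text/plain"], ["Hello"]] };' > app.psgi

# Gazelle, Starman, and Feersum all plug into plackup as handlers
plackup -s Gazelle --max-workers 8 --port 5000 app.psgi   # Gazelle
plackup -s Starman --workers 8     --port 5000 app.psgi   # Starman
plackup -s Feersum                 --port 5000 app.psgi   # Feersum

# Same ab run each time: 100 concurrent clients, 10000 requests total
ab -n 10000 -c 100 http://127.0.0.1:5000/
```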
I think all of these are performing professionally for me, when you consider that the request overhead is extremely small compared to the time for a database request. All of these are intended to be combined with a front end like Apache or nginx, which is what you would use to serve static content. In fact, most of them warn you that they *need* to be combined with a frontend to get safe HTTP sanity checking. If Perl is only used for the dynamic content, the performance overhead of the app server is even less important, because the database will dominate.
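The front-end arrangement described above can be sketched as an nginx config fragment. Paths and the port are illustrative assumptions: nginx serves static files itself and only proxies dynamic requests to the PSGI app server.

```nginx
server {
    listen 80;

    location /static/ {
        root /var/www/myapp;   # static content never touches Perl
    }

    location / {
        # Starman/Gazelle/Feersum listening on the loopback interface;
        # nginx also performs the HTTP sanity checking mentioned above.
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```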
by beautyfulman (Sexton) on Dec 09, 2021 at 02:54 UTC
by NERDVANA (Priest) on Dec 09, 2021 at 18:23 UTC
I feel like something is going wrong with the "accept" loops in Gazelle. I checked the implementation, and it looks very much like it forks, and then each worker calls accept() on the same listening socket, and then they *should* be able to receive new connections in parallel. Yet on the slower server, the pool of workers was unable to beat the performance of a single worker. I'm aware of the "stampede" (thundering herd) effect, where a listen socket becoming readable wakes all the workers instead of just one, but with Gazelle's loop implemented in C, that should still be low enough overhead that they should be able to run in parallel even for a tiny request. I'd be interested to see it if you or anyone else decides to chase this down to some microsecond-level traces. It's a lot of effort, though, to shave off a mere 3-5ms per request. It wouldn't make any difference to any of the apps I maintain.
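The prefork pattern described here, one shared listening socket, then fork, then each worker blocking in accept(), can be sketched in plain Perl. This is illustrative only: Gazelle's real accept loop is implemented in C/XS, and the port, backlog, and worker count below are made up.

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Parent opens ONE listening socket before forking; children inherit it.
my $listener = IO::Socket::INET->new(
    LocalPort => 5000,
    Listen    => 1024,     # listen() backlog: queue of pending connections
    ReuseAddr => 1,
) or die "listen: $!";

for my $n (1 .. 4) {
    defined(my $pid = fork) or die "fork: $!";
    next if $pid;          # parent continues forking workers

    # Worker: block in accept() on the shared socket. The kernel hands
    # each incoming connection to exactly one of the blocked workers,
    # which is what lets them serve requests in parallel.
    while (my $conn = $listener->accept) {
        print $conn "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
        close $conn;
    }
    exit;
}
wait for 1 .. 4;   # parent reaps its four workers
```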
by beautyfulman (Sexton) on Dec 18, 2021 at 21:48 UTC
Re: PSGI/Plack unsatisfactory performance
by trwww (Priest) on Dec 07, 2021 at 01:09 UTC
mod_perl: it makes me sad people aren't using it. I just did a hello-world test on my system: a bare Apache with mod_php 7, and then that same Apache with mod_perl loaded instead. I got 1,500 more requests per second with mod_perl. The following was performed on a relatively low-traffic Intel E3-1230 v6 with 64 GB of RAM.

mod_php for a baseline: config:
script:
test:
apache bench:
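For anyone wanting to run a comparison like this, the mod_php side might be wired up roughly as follows. This is a sketch: the module path, document root, and request counts are illustrative assumptions, not the poster's actual config.

```shell
# 1. httpd.conf fragment (mod_php): route .php requests to the PHP engine
#      LoadModule php7_module modules/libphp7.so
#      <FilesMatch "\.php$">
#          SetHandler application/x-httpd-php
#      </FilesMatch>

# 2. The hello-world script
echo '<?php echo "Hello world";' > /var/www/html/hello.php

# 3. Benchmark: 100 concurrent clients, 10000 requests
ab -n 10000 -c 100 http://localhost/hello.php
```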
Now mod_perl: config:
script:
test:
apache bench:
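A minimal mod_perl2 equivalent might look like the following. A sketch only: the module name `MyApp::Hello`, the `/hello` location, and the ab parameters are illustrative, though the handler uses the standard mod_perl2 API.

```perl
# httpd.conf fragment:
#     LoadModule perl_module modules/mod_perl.so
#     PerlModule MyApp::Hello
#     <Location /hello>
#         SetHandler perl-script
#         PerlResponseHandler MyApp::Hello
#     </Location>

# MyApp/Hello.pm: a hello-world mod_perl2 response handler
package MyApp::Hello;
use strict;
use warnings;
use Apache2::RequestRec ();
use Apache2::RequestIO ();
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;                      # Apache2::RequestRec object
    $r->content_type('text/plain');
    $r->print("Hello world");
    return Apache2::Const::OK;          # tell Apache the request succeeded
}
1;

# Benchmark the same way as the mod_php side:
#     ab -n 10000 -c 100 http://localhost/hello
```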
by vincent_veyron (Beadle) on Dec 09, 2021 at 00:47 UTC
"mod_perl: it makes me sad people aren't using it." Hear! Hear! I reproduced your script on my lowly Kimsufi and Online servers; I'm getting awful numbers compared to yours. Do you have any idea why that is? Something in the hardware, maybe?
https://compta.libremen.com
by beautyfulman (Sexton) on Dec 07, 2021 at 12:28 UTC
Re: PSGI/Plack unsatisfactory performance
by Anonymous Monk on Dec 07, 2021 at 08:41 UTC