in reply to Perlmonks Response Time Bottoming Out

Personally, you can try to deactivate your nodelets in Nodelet Settings; this could reduce some overhead and help you get a response before the timeout. (credits to kcott)

On the server side:

Apart from the ideas about reverse proxies dealing with anonymous clients ... (which would take longer to implement):

We could block all anonymous access on the servers every other hour as "maintenance downtime" till the problem is fixed. ¹

As an extended idea:

In these windows, individual monks could test private proxies and perhaps share the settings.

Personally, I have left a trove of content here over the years. I kind of need a backup.

Saying that, I realize I could try to poll the Wayback Machine. Hmm 🤔
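
Just to sketch that idea (untested, and purely my own assumption about how I'd go about it): the Wayback Machine has a public CDX API that lists captures, e.g. for the perlmonks.org domain:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use HTTP::Tiny;

    # list some recent Wayback Machine captures for the perlmonks.org domain
    # (query parameters are assumptions -- adjust field list and limit to taste)
    my $cdx = 'https://web.archive.org/cdx/search/cdx'
            . '?url=perlmonks.org&matchType=domain'
            . '&fl=timestamp,original&limit=20';

    my $res = HTTP::Tiny->new->get($cdx);
    die "CDX request failed: $res->{status} $res->{reason}\n" unless $res->{success};
    print $res->{content};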

Cheers Rolf
(addicted to the Perl Programming Language :)
see Wikisyntax for the Monastery

Update
¹) I just checked that it's possible to have rewrite rules in Apache based on the hour and on cookie content. I didn't try to combine both though, but I bet that's possible too.
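
To make that concrete, here is a minimal sketch (untested) of blocking anonymous requests during even hours only; the cookie name "userpass" is an assumption for detecting logged-in users, so adjust it to whatever the real login cookie is called:

    RewriteEngine On
    # %{TIME_HOUR} is the server's current hour; an hour is even iff its last digit is even
    RewriteCond %{TIME_HOUR} [02468]$
    # no login cookie present -> treat the request as anonymous
    RewriteCond %{HTTP_COOKIE} !userpass [NC]
    # answer with 403 Forbidden during the "maintenance" hour
    RewriteRule ^ - [F,L]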

Re^2: Perlmonks Response Time Bottoming Out
by syphilis (Archbishop) on Sep 24, 2025 at 02:21 UTC
    We could block all anonymous access on the servers every other hour as "maintenance downtime" till the problem is fixed.

    Nothing against anonymity, but if blocking anonymous access will work, then that should be done immediately.
    (Just do it as a permanent arrangement ... if that's what it takes.)

    Cheers,
    Rob

      While I'm certain that the suggestions here are well-meant, I think they are unlikely to have any impact. Many of the Anonymous requests are already served from static pages; serving an "access denied" page instead of the static page will make very little difference.

      I think the next step will be to separate the machine where logged-in users access the site from the machine where the crawlers roam. But that requires some setup and thinking, and I won't do that until I have reserved enough time to implement this and also deal with the fallout.

      One downside of having the site accessible at all is that people have again a venue to complain about the site responsiveness.

        > One downside of having the site accessible at all is that people have again a venue to complain about the site responsiveness.

        If our site becomes inaccessible, one of the upsides will be that there won't be any place to complain about it ;-)

        map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]
        > One downside of having the site accessible at all is that people have again a venue to complain about the site responsiveness.

        I'm not complaining, please see it as a brainstorm of options to pick from.

        > I think they are unlikely to have any impact.

        I'm ignorant about the bots' heuristics. Maybe they act differently on an access denied? ¹

        Some other suggestions you are free to consider:

        I opened the Chrome console and looked at the web response for one static page (ID: 100001).

        It is served with Cache-Control: max-age=0, indicating that the client shouldn't treat it as static.³

        So a bot might conclude it's worth polling it again and again at short intervals.

        Various other embedded resources are also loaded with a delay, and sometimes those take even longer to load than the initiating page above.

        It seems like they are dynamically pulled from a node for each cached page, even though their content is completely static (JS and CSS) and they have no cache-control setting.²

        That said, I'm quite inexperienced with server settings, cache control and bot behavior, so my observations may be irrelevant and my timings may be skewed by the fact that I'm using mobile web access.

        Please ignore me if that's the case ...

        Cheers Rolf
        (addicted to the Perl Programming Language :)
        see Wikisyntax for the Monastery

        updates

        ¹) better yet, an HTTP 503 Service Unavailable with a Retry-After header set to 60 min
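
        A minimal sketch (untested; the cookie name "userpass" and the Apache 2.4 expression syntax are assumptions on my side) of what that could look like:

            RewriteEngine On
            # no login cookie -> answer with 503 Service Unavailable instead of content
            RewriteCond %{HTTP_COOKIE} !userpass [NC]
            RewriteRule ^ - [R=503,L]
            # mod_headers (Apache 2.4+): attach Retry-After (60 min) only to the 503 responses
            Header always set Retry-After "3600" "expr=%{REQUEST_STATUS} == 503"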

        ²) When logged in, my browser is indeed caching those embedded files; not sure what's going wrong here.

        ³) Apparently at least the Google bot honors cache settings. Delivering the static/cached files with cache-control headers might eventually help reduce the load?
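
        For what it's worth, a minimal sketch (untested; paths and lifetimes are pure assumptions) of such headers with Apache's mod_headers:

            # let clients and bots cache the static JS/CSS assets for a day
            <FilesMatch "\.(css|js)$">
                Header set Cache-Control "public, max-age=86400"
            </FilesMatch>
            # the pre-rendered node pages could get a short lifetime (e.g. max-age=600)
            # instead of max-age=0, so polite bots stop re-polling them constantly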

      > (Just do it as a permanent arrangement ... if that's what it takes.)

      I don't think becoming invisible is a viable option.

      It's already the case that most search engines except Google don't index us anymore.

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      see Wikisyntax for the Monastery