PerlMonks
upstream prematurely closed connection while reading response header from upstream

by Digioso (Sexton)
on Feb 24, 2023 at 13:59 UTC [id://11150577]

Digioso has asked for the wisdom of the Perl Monks concerning the following question:

Hi, I've set up a server and installed HestiaCP as a server management tool to install and control NGINX, Apache, databases, and so on. It's an Ubuntu 22.04 server. Unfortunately HestiaCP doesn't support Perl, so I installed the mod_perl package myself and changed the Apache configuration accordingly. Now I've started migrating stuff from my old webhoster to the new server, and I'm facing lots of problems. They seem to occur only when I use self-written modules; using pre-existing modules doesn't cause any problems. The scripts and modules were running perfectly fine on my old webhoster, and I only needed to change a few paths in the scripts/modules to reflect the new locations, so I am pretty sure that I do not have a syntax error or anything like that. Around 50% of the time I see messages like these in the web server log:

2023/02/24 14:36:45 [error] 1467#1467: *1180 upstream prematurely closed connection while reading response header from upstream, client: 87.122.231.239, server: digioso.tk, request: "GET /test.pl HTTP/2.0", upstream: "https://10.0.0.113:8443/test.pl", host: "digioso.tk"

So... 50% of the time the page loads, 50% it doesn't. The upstream errors point to NGINX (at least all the search results I found on Google point there), but I couldn't find anything even remotely connected to my issue.

Links to test:
https://digioso.tk/test.pl <- I am using a self-written module here and the issue occurs about 50% of the time. The website runs into error 500 and I can see the above upstream message in my webserver logfile. If you don't get error 500 immediately, please refresh the page a couple of times.
https://digioso.tk/test2.pl <- No self-written modules. Runs perfectly fine.

Source code:
test.pl
#!/usr/bin/perl -w
use strict;
use CGI;
use CGI::Carp qw(fatalsToBrowser warningsToBrowser);
use lib "/home/digioso/web/digioso.tk/stuff";
use Navi;

Navi::print_navi();
print "test";
Navi::end_navi();
test2.pl
#!/usr/bin/perl -w
use strict;
use CGI;
use CGI::Carp qw(fatalsToBrowser warningsToBrowser);

my $cgi = new CGI;
binmode STDOUT, ":utf8";
print $cgi->header(-type => 'text/html', -charset => 'UTF-8');
print "test2";
print $cgi->end_html;
Navi.pm (located in /home/digioso/web/digioso.tk/stuff):
#!/usr/bin/perl -w
use strict;
use warnings;
use CGI;
use CGI::Carp qw(fatalsToBrowser warningsToBrowser);

package Navi;

my $cgi = new CGI;
binmode STDOUT, ":utf8";

sub print_navi {
    print $cgi->header(-type => 'text/html', -charset => 'UTF-8');
}

sub end_navi() {
    print $cgi->end_html;
}

So basically test.pl outputs the same as test2.pl. The only difference is that starting and ending the HTML is done via a module. The original Navi.pm contains many more things (e.g. including CSS and so on), but for demonstration purposes I thinned it down to the above.

Re: upstream prematurely closed connection while reading response header from upstream
by Corion (Patriarch) on Feb 24, 2023 at 14:06 UTC

    My guess is that your "global" variables in package Navi are not always initialized, maybe due to some mod_perl weirdness, i.e. how it wraps your code in a subroutine. You should see a warning like Variable will not stay shared in the webserver logs.

    My first approach would be to move the creation of the CGI object into a subroutine and always call that subroutine instead of hoping that the global variable is still available.
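    The effect described here can be reproduced without mod_perl at all. ModPerl::Registry compiles each script into a subroutine, so a file-scoped my variable becomes a lexical of that generated sub, and a named sub defined inside it closes over only the first instance of that lexical. A minimal self-contained sketch (all names here are made up for illustration):

```perl
use strict;
use warnings;

# Stand-in for a registry-wrapped CGI script: the outer sub plays the
# role of the wrapper that mod_perl generates around the script body.
sub run_script {
    my $greeting = shift;    # was a file-scoped "my" in the original script

    # Named sub inside a sub: Perl warns "Variable "$greeting" will not
    # stay shared" because it captures only the FIRST instance.
    sub show_greeting { return $greeting }

    return show_greeting();
}

print run_script("first"),  "\n";   # first request: variable is shared
print run_script("second"), "\n";   # later requests: stale first value
```

    On the second call the inner sub still sees the value from the first call, which is exactly the kind of stale state the $cgi object in Navi.pm can end up with under mod_perl.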

      Thanks a lot for this. And it seems that is indeed the solution. The scripts have been running for years without any issues, though. So I find it quite astonishing that this really is the case. oO I uploaded a test3.pl where I tried your suggestion and this one works without any issues. But urgh... That would mean I have to adjust that in dozens of scripts... My whole site is running on Perl... :(


      Link: https://digioso.tk/test3.pl

      Source code test3.pl
      #!/usr/bin/perl -w
      use strict;
      use CGI;
      use CGI::Carp qw(fatalsToBrowser warningsToBrowser);
      use lib "/home/digioso/web/digioso.tk/stuff";
      use Navi2;

      my $cgi = Navi2::create_cgi();
      Navi2::print_navi($cgi);
      print "test";
      Navi2::end_navi($cgi);

      Source code Navi2.pm:
      #!/usr/bin/perl -w
      use strict;
      use warnings;
      use CGI;
      use CGI::Carp qw(fatalsToBrowser warningsToBrowser);

      package Navi2;

      binmode STDOUT, ":utf8";

      sub create_cgi {
          return new CGI;
      }

      sub print_navi($) {
          my $cgi = shift;
          print $cgi->header(-type => 'text/html', -charset => 'UTF-8');
      }

      sub end_navi($) {
          my $cgi = shift;
          print $cgi->end_html;
      }
        Now I started migrating stuff from my old webhoster to the new server

        This is why things changed, and why you have to edit your scripts.

        If you are interested, you could use this opportunity to migrate to (say) Mojolicious, which lets you test your scripts locally and move some of your HTML generation into templates. But as a first step, instead of passing $cgi into print_navi(), you could create it within print_navi() if you don't need it elsewhere, especially if you're only using it for HTML generation.

        You also want to move the binmode STDOUT, ":utf8"; into your first call, likely create_cgi(), so it gets executed on every request rather than only once at module load.
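        Putting both suggestions together, a sketch of a revised Navi2.pm might look like this (the `shift // create_cgi()` fallback is my addition, following the idea of creating the object inside the subs; the prototypes from the original are dropped, since Perl prototypes don't enforce argument counts the way most people expect):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Navi2;

use CGI;
use CGI::Carp qw(fatalsToBrowser warningsToBrowser);

# Create a fresh CGI object per request; binmode lives here so it
# runs on every invocation instead of once at module load time.
sub create_cgi {
    binmode STDOUT, ":utf8";
    return CGI->new;
}

sub print_navi {
    my $cgi = shift // create_cgi();   # use caller's object, or make our own
    print $cgi->header(-type => 'text/html', -charset => 'UTF-8');
}

sub end_navi {
    my $cgi = shift // create_cgi();
    print $cgi->end_html;
}

1;
```

        With this layout test3.pl keeps working unchanged, and older scripts that never passed a $cgi object still get a fresh one per call.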

Re: upstream prematurely closed connection while reading response header from upstream
by cavac (Parson) on Feb 27, 2023 at 17:34 UTC

    While your question has already been answered, you should also check the timeout settings of the old vs new webserver, especially if you have some long running scripts.

    Also, if you're getting a lot of web calls (you are implementing some API or something), I would check the new system for any anti-flooding/anti-DDoS settings or daemons. While having these is generally a good idea, it's quite annoying when you are doing some API tests and the server suddenly decides to IPTable your home IP for a day. I can neither confirm nor deny that tools like this are a two-edged sword and may or may not have bugs that prevent the tool from removing entries from the block list after the expiry time.

