Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Seeking community wisdom on why the ApacheBench utility consistently returns a lot of "Non-2xx responses" (502 Bad Gateway) when benchmarking my helloworld web app, which uses Perl's Net::Async::FastCGI behind Nginx as a reverse proxy. The concurrency level is fairly low at 50, but it still returns lots of "Non-2xx responses"?? Any insight is greatly appreciated. Please see below for the test code and the ApacheBench command used for the benchmark:

1. ApacheBench command:
Send 100 requests at concurrency level of 50:
==============================================================================
ab -l -v 2 -n 100 -c 50 "http://localhost:9510/helloworld/"
which returns:
...
Concurrency Level:      50
Time taken for tests:   0.015 seconds
Complete requests:      100
Failed requests:        0
Non-2xx responses:      85    # NOTE: all of these are "502 Bad Gateway"
...
2. Nginx config:
==============================================================================
location /helloworld/ {
    proxy_buffering off ;
    gzip            off ;
    fastcgi_pass    unix:/testFolder/myPath/myUDS.sock ;
    include         fastcgi_params ;
}
3. helloworld test script:
==============================================================================

use strict ;
use warnings ;

use IO::Async::Loop ;
use Net::Async::FastCGI ;

# This script will respond to HTTP requests with a simple "Hello, World!" message.

#If using TCP port for communication:
#my $PORT = 9890 ;

#If using Unix domain socket for communication:
my $uds = 'myUDS.sock' ;

# Create an event loop
my $loop = IO::Async::Loop->new() ;

# Define the FastCGI request handler subroutine
# Parms: request (Net::Async::FastCGI::Request); Return: void
sub on_request {
    my ( $fcgi, $req ) = @_ ;

    # Prepare the HTTP response
    my $response = "Hello, World!\n" ;
    my $respLen  = length( $response ) ;

    # Print HTTP response headers, blank line, then the body
    $req->print_stdout(
          "Status: 200 OK" . "\n"
        . "Content-type: text/plain" . "\n"
        . "Content-length: " . $respLen . "\n"
        . "\n"
        . $response
    ) ;

    # Finish the request
    $req->finish() ;
}#end sub

# Create a new FastCGI server instance
my $fcgi = Net::Async::FastCGI->new(
    #handle     => \*STDIN ,        # Read FastCGI requests from STDIN
    on_request => \&on_request ,    # Assign the request handler subroutine
) ;

# Add the FastCGI server instance to the event loop
$loop->add( $fcgi ) ;

$fcgi->listen(
    #service => $PORT ,             # if using TCP portnum
    addr => {                       # if using Unix domain socket
        family   => "unix" ,
        socktype => "stream" ,
        path     => $uds ,
    } ,
    #host => '127.0.0.1' ,
    on_resolve_error => sub { print "Cannot resolve - $_[-1]\n" } ,
    on_listen_error  => sub { print "Cannot listen - $_[-1]\n" } ,
) ;

# Remove the socket file on exit
$SIG{ HUP }  = sub { unlink $uds ; exit ; } ;
$SIG{ TERM } = sub { unlink $uds ; exit ; } ;
$SIG{ INT }  = sub { unlink $uds ; exit ; } ;

# Run the event loop
$loop->run() ;


Replies are listed 'Best First'.
Re: Benchmarking Perl's Net::Async::FastCGI app consistently return LOTS of "Non-2xx responses"
by ysth (Canon) on Dec 04, 2025 at 20:51 UTC
    Does reducing the concurrency help? Does using a port instead of a socket make a difference?
      I already tried reducing both the total number of requests to 40 and the concurrent requests to 20, and it still returns 24 "Bad Gateway"s. Since a concurrency level of 20 is really child's play, there's something very funky going on here. Using a TCP port number instead of a Unix socket makes it even worse. I literally used the helloworld example verbatim (except for the Unix socket tweak) from the Net::Async::FastCGI examples on metacpan.org. So I've kind of run out of ideas here...

        Maybe try reducing the concurrency to 1 and 2, just to see if you are at all able to have more than one request in flight at the same time?

      Could someone try running the same thing and see if you're experiencing the same issue? All you need is Perl 5.14 or above; install the Net::Async::FastCGI module and the Nginx web server, configure Nginx with the little snippet above to hook it up to the helloworld FastCGI script, then run the script in one terminal and start Nginx in another.
Re: Benchmarking Perl's Net::Async::FastCGI app consistently return LOTS of "Non-2xx responses"
by santa100 (Initiate) on Dec 05, 2025 at 18:58 UTC
    The default socket listen queue size is set to a very low number. Set it to a higher number and the issue should be resolved:
    $fcgi->listen(
        ...
        queuesize => 100 ,
        ...
    ) ;
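    Putting that together with the OP's script, the listen call would look like this sketch. `queuesize` is the listen(2) backlog passed through by IO::Async::Loop's listen (its documented default is small, which is easily overrun at concurrency 50, so nginx gets connection failures and reports 502); the value 100 is just a number to try, not a tuned figure:

```perl
$fcgi->listen(
    addr => {    # same Unix domain socket as in the OP's script
        family   => "unix" ,
        socktype => "stream" ,
        path     => $uds ,
    } ,
    # Backlog of pending connections the kernel will hold before
    # refusing new ones; raise it above the expected burst size.
    queuesize => 100 ,
    on_listen_error => sub { print "Cannot listen - $_[-1]\n" } ,
) ;
```

    This is a fragment of the existing script ($fcgi and $uds as defined above), not a standalone program.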
Re: Benchmarking Perl's Net::Async::FastCGI app consistently return LOTS of "Non-2xx responses"
by 1nickt (Canon) on Dec 05, 2025 at 13:08 UTC

    Edit: I found the answer to the problem described below was to use 127.0.0.1 rather than localhost (on MacOS).

    I got the same results:

    Concurrency Level:      50
    Time taken for tests:   0.040 seconds
    Complete requests:      100
    Failed requests:        0
    Non-2xx responses:      57
    Total transferred:      44998 bytes
    HTML transferred:       28931 bytes
    Requests per second:    2507.33 [#/sec] (mean)
    The Non-2xx responses were 502s.

    I tried fiddling with the proxy settings, but it made no difference.

    Hope this helps.

    Original comment:

    More instructions needed. The fastcgi script is serving the /helloworld content when I use curl -L (it's returning a 301), but when I use Apache Bench I get apr_socket_connect(): Invalid argument (22). There's also no file at /tmp/myUDS.sock (which is where I configured it in nginx conf and in the Async script).


    The way forward always starts with a minimal test.
      Try including the forward slash at the end too, e.g. http://localhost:<port>/helloworld/ . As for the location of the Unix socket file, maybe try saving it in your home folder instead of /tmp, to guarantee the script can write to that location??

        This turned out to be a macOS vagary: I had to use 127.0.0.1 instead of localhost for ab to work.

