Re: Strange blocking issue with HTTP::Daemon
by Khen1950fx (Canon) on Aug 10, 2010 at 22:58 UTC
Can someone explain to me what is happening here? Come on, monks...
Why is a vanilla HTTP::Daemon behaving like that, and what must be done to solve it?
Switching to Net::Server would be a bit unsatisfactory, especially as its ::HTTP personality is so far less usable than HTTP::Daemon...
And: doesn't the ancient Net::Server::NonBlocking effectively do the same as the ::Fork, ::PreFork, etc. personalities? As said, I've tried a script built around Net::Server and it showed the same behaviour (if I remember correctly). I might port my test-server.pl to Net::Server::MultiType, but I would prefer to understand why I should invest this effort.
Replying to an ancient post, but this has little to do with HTTP::Daemon. The issue is that once you have accepted a connection, you loop processing its requests until the client goes away. LWP::UserAgent appears to close the connection after each request, but a web browser will try to keep the connection open, meaning your code will block in get_request.
The example code in the HTTP::Daemon synopsis is the problem. The code below is how I use HTTP::Daemon. Instead of blocking in the get_request call, I put the daemon and the client connections into an IO::Select object, block on that, and only process data from the handles that are ready. That way a single-threaded server can appear to process data concurrently from multiple connections, including regular browsers that keep the connection open.
use strict;
use warnings;
use HTTP::Daemon;
use IO::Select;

my $d = HTTP::Daemon->new(
    LocalAddr => 'localhost',
    LocalPort => 4242,
    ReuseAddr => 1) || die;
my $select = IO::Select->new();
$select->add($d);
while ($select->count()) {
    my @ready = $select->can_read();    # Blocking
    foreach my $connection (@ready) {
        if ($connection == $d) {
            # Ready on the daemon socket, so accept and add the connection
            my $client = $connection->accept();
            $select->add($client);
        }
        else {
            # Ready on a client connection
            my $request = $connection->get_request();
            if ($request) {
                # process the request and send a response here
            }
            else {
                # connection closed by the client
                $select->remove($connection);
                $connection->close();    # probably not necessary
            }
        }
    } # end processing connections with data
}
Now back to investigating the problem I was originally looking at: why does get_request block on malformed / incomplete requests?
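One mitigation worth trying (an assumption on my part, based on HTTP::Daemon taking its constructor arguments from IO::Socket::INET): give the daemon a read timeout, so get_request() gives up on a stalled, half-sent request instead of blocking forever. A minimal sketch, assuming accepted connections inherit the timeout:

use strict;
use warnings;
use HTTP::Daemon;

my $d = HTTP::Daemon->new(
    LocalAddr => 'localhost',
    LocalPort => 4242,
    ReuseAddr => 1,
    Timeout   => 5,   # seconds; an IO::Socket::INET option
) || die "cannot start daemon: $!";

# With a timeout set, get_request() on a connection that has sent only a
# partial request should return undef instead of blocking forever; treat
# that like a closed connection and drop it from the select set.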
Re: Strange blocking issue with HTTP::Daemon
by isync (Hermit) on Aug 11, 2010 at 12:30 UTC
Another post, as I'd like to add a request for comments on what I've learned:
Result 1:
So HTTP::Daemon in essence behaves correctly, keeping connections alive on HTTP/1.1 requests, with the effect that this might block another client trying to connect.
This is a problem as long as HTTP::Daemon is single-threaded/non-forking, right? With a spawning HTTP::Daemon connection wrapper, although one connection might be doing keep-alive, there would be other processes idling, waiting for connections, right?
Or do all the threads/forks still share a single socket, which is then blocked?
As said, I think I've seen this "blocking" behaviour with forking Net::Server and supposedly non-blocking AnyEvent::HTTPD based scripts - well, I think...
Forks sharing a single socket would explain this, but again, I admit my limited understanding of how sockets work.
Result 2:
Assuming that forks do not share a single (blocked) socket, a forking server might be able to serve keep-alive connections and closing ones side by side.
Adapting a ForkOnAccept concept from here, this is my result:
#!/usr/bin/perl
use HTTP::Daemon;
use Data::Dumper;

my $d = HTTP::Daemon->new(
    LocalAddr => 'localhost',
    LocalPort => 4242,
    ReuseAddr => 1
) || die;

my $cnt;
# response loop
while (my $c = $d->accept) {
    my $pid = fork();
    # We are going to close the new connection on one of two conditions
    # 1. The fork failed ($pid is undefined)
    # 2. We are the parent ($pid != 0)
    if (!defined $pid || $pid == 0) {
        $c->close;
        print "Needs close: $pid\n";
        next;
    }
    # From this point on, we are the child.
    # $c->close; # Close the listening socket (always done in children)
    # Handle requests as they come in
    while (my $request = $c->get_request) {
        print "Request:\n" . Dumper($request);
        my $response = HTTP::Response->new(200, 'OK');
        $response->header('Content-Type' => 'text/html');
        $response->content("$cnt Working! (pid $pid)");
        $cnt++;
        # print "Response:\n".Dumper($response);
        $c->send_response($response);
    }
    $c->close;
    undef($c);
}
And guess what, it does work. (And on keep-alive connections the pid remains the same.) Mh, although I get a lot of "Needs close: 0" messages from non-keep-alive clients. I think I need to think through fork()'ing again...
What's wrong/anything wrong with this design?
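One plausible diagnosis: fork() returns 0 in the child and the child's PID in the parent, so the test $pid == 0 makes the *child* close the connection and fall back into accept(), while the parent ends up serving the request - hence the "Needs close: 0" messages. A corrected ForkOnAccept sketch along the lines the comments intend (parent closes its copy, child serves and exits; details are my assumptions, not from the thread):

#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Daemon;
use HTTP::Response;

my $d = HTTP::Daemon->new(
    LocalAddr => 'localhost',
    LocalPort => 4242,
    ReuseAddr => 1
) || die "cannot start daemon: $!";

while (my $c = $d->accept) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid) {
        # Parent: the child owns this connection now; keep accepting.
        $c->close;
        next;
    }
    # Child: close our copy of the listening socket and serve the client.
    $d->close;
    my $cnt = 0;
    while (my $request = $c->get_request) {
        my $response = HTTP::Response->new(200, 'OK');
        $response->header('Content-Type' => 'text/html');
        $response->content($cnt++ . " Working! (pid $$)");
        $c->send_response($response);
    }
    $c->close;
    exit 0;    # Essential: otherwise the child falls back into accept()
}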
"Make it work, then make it fast!"
Benchmark-wise, this simple solution is slow due to the overhead of fork(). Now I see why people came up with pre-forking schemes, where the server forks workers in advance, so that at any given time a ready-made process is idling, waiting for a request. In ab (ApacheBench), this forking solution gets creamed by the former/problematic non-forking one...
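That pre-forking pattern is what Net::Server::PreFork packages up: a pool of workers is forked before any client connects, and each accepted connection is handled by an already-running child. A minimal sketch (the package name and option values are illustrative assumptions):

package PreForkDemo;
use strict;
use warnings;
use parent 'Net::Server::PreFork';

# Net::Server ties the client socket to STDIN/STDOUT for us.
sub process_request {
    my $self = shift;
    while (my $line = <STDIN>) {
        $line =~ s/\r?\n$//;
        last if $line eq '';
        print "You said: $line\r\n";
    }
}

PreForkDemo->run(
    port        => 4242,
    min_servers => 5,     # workers forked up front, idling for requests
    max_servers => 20,
);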
Re: Strange blocking issue with HTTP::Daemon
by isync (Hermit) on Aug 11, 2010 at 09:59 UTC
Is this so obvious, or why won't anyone explain?
$c->send_response($response);
$c->force_last_request;
Or force an HTTP version less than 1.1; then the server won't try to use a persistent connection.
It appears the OS is allowing Perl and Firefox to share a socket to localhost:4242, and then Firefox is blocking access until you exit. I'm not sure the OS is doing the right thing here.
Re: Strange blocking issue with HTTP::Daemon
by isync (Hermit) on Aug 11, 2010 at 11:52 UTC
Anonymous Monk keeps referring to the bugs filed for libwww-perl in relation to blocking/nonblocking sockets as the underlying logic for HTTP::Daemon, right?
So the dirty solution is brute-forcing a "Connection: close" (by using $c->force_last_request;) after each response, effectively what HTTP/1.0 does and HTTP/1.1 was meant to supersede, right?
A pity, as HTTP::Daemon has elaborate handling of HTTP/0.9, HTTP/1.0 and HTTP/1.1.
To aid investigation in the "what the OS does" corner, I'd like to add that I've seen this behaviour with Linux/libwww-perl and Win32/libwww-perl clients. The common denominator really seems to be libwww-perl, which runs into these deadlocks after a number of successful requests or right away.
Anonymous Monk keeps referring to the bugs filed for libwww-perl in relation to blocking/nonblocking sockets as the underlying logic for HTTP::Daemon, right?
Why don't you ask him? In any case, the advice is for you to file an official bug report.
So the dirty solution is brute-forcing a "Connection: close" (by using $c->force_last_request;) after each response, effectively what HTTP/1.0 does and HTTP/1.1 was meant to supersede, right?
Nothing dirty about it, it's a working solution.
The common denominator really seems to be libwww-perl, which runs into these deadlocks after a number of successful requests or right away.
Try a Python WWW client with keep-alive and you should observe the same behaviour when you use a browser to effectively steal the socket.
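For that matter, you can reproduce it from Perl too: LWP::UserAgent only closes after each request by default, but with keep_alive => 1 it uses a connection cache and holds the socket open, just like a browser. A sketch, assuming the server above is running on localhost:4242:

use strict;
use warnings;
use LWP::UserAgent;

# keep_alive => 1 turns on LWP's connection cache, so the TCP connection
# to the server is reused -- and held open -- between requests.
my $ua = LWP::UserAgent->new(keep_alive => 1);
for my $i (1 .. 3) {
    my $res = $ua->get('http://localhost:4242/');
    print "$i: ", $res->status_line, "\n";
}
# As long as $ua lives, its cached connection keeps a single-threaded
# server stuck in get_request() for this client.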
P.S. There is no need for small-font footnotes; I almost mistook yours for a signature.