the biggest problem is in the code: ... when running, one thread gets the 'lock' and accepts a new connection, then 'unlock's, and it is blocked at the last line. I had done the following test ...
This will be affected by the changes required to address the other problems I've already tackled, but there is a particularly important point to make. Your basic architecture has one big flaw at its core:
You are calling $listener->accept() in all your threads.
The normal multi-processing (be it threading or forking) server architecture has a main loop and a single point where incoming connections are accepted. It then either spawns a thread or process to handle the newly connected client, or passes the client handle to a pre-existing (pre-forked) thread or process to deal with it.
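For contrast, here is a minimal sketch of that shape using Perl ithreads -- one accept loop in the main thread and one detached thread per connection. The port number and the canned response are placeholders, not taken from your code:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use threads;
    use IO::Socket::INET;

    ## Sketch only: port and response body are invented for illustration
    my $listener = IO::Socket::INET->new(
        LocalPort => 8080,
        Listen    => 10,
        ReuseAddr => 1,
    ) or die "Could not listen: $!";

    while( my $client = $listener->accept ) {
        ## hand the connected client off to its own (detached) thread;
        ## the thread works with its own copy of the handle
        threads->create( \&handle_client, $client )->detach;
    }

    sub handle_client {
        my( $client ) = @_;
        ## (a real handler would read and parse the request first)
        print { $client } "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
        close $client;
    }

The pre-spawned (pooled) variant has exactly the same single accept point; only the hand-off to the workers differs.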
With your architecture as it stands, although each worker will be using a CLONED copy of the listener socket at the Perl level, underneath the Perl-level structures and data -- somewhere within the C runtime, the OS, or the TCP/IP stack -- they are all trying to use and control a single socket.
And whilst you are applying Perl-level locking on the resource, which should prevent any Perl-level sharing problems, it is not at all clear to me what the effect of calling accept() on that single socket from multiple threads will be. Basically, I've never seen it done that way in either Perl or C.
Maybe it is fine if you only enter into the accept state on one thread concurrently (per your locking). But maybe not. And it is quite likely to be affected by the C runtime and/or OS you are running.
Maybe it is a clever way of dodging the 'socket passing' problem, but I'd have to either see a (fully working) example of the technique in use, or code up a (greatly simplified) example of my own to convince myself that it works correctly under load.
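If I did, it would be something along these lines -- the bare bones of your approach, with each response tagged by the thread that served it, so you can see whose accept() fired. The port, thread count, and response text are all arbitrary:

    use strict;
    use warnings;
    use threads;
    use threads::shared;
    use IO::Socket::INET;

    my $listener = IO::Socket::INET->new(
        LocalPort => 8080, Listen => 10, ReuseAddr => 1,
    ) or die "Could not listen: $!";

    my $lock :shared;

    for ( 1 .. 4 ) {
        threads->create( sub {
            while( 1 ) {
                my $client;
                {   ## serialise the accept calls, as your code does
                    lock $lock;
                    $client = $listener->accept;
                }
                next unless $client;
                print { $client }
                    "HTTP/1.0 200 OK\r\n\r\nserved by thread ", threads->tid, "\n";
                close $client;
            }
        } )->detach;
    }

    sleep 1 while 1;    ## keep the main thread alive

Hammer that with an automated client and watch whether connections are ever dropped, duplicated, or handled by the wrong thread.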
Bottom line: there are a lot of basic errors in your code and some questionable architecture. It's not easy to suggest how to fix it up completely, as how you would correct the basic problems (and whether they would still be needed) really depends on how you decide to address the architectural issues.
For example, if you take my advice about modifying the per-thread control structures within the threads themselves, rather than passing messages to the main thread asking it to do it, then the reason for having the queue pretty much disappears. All that would leave your main thread to do is adjust the number of workers in the pool. But even the way you are doing that is questionable.
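To make that concrete, here is roughly what I mean -- each worker flips its own entry in a shared status hash under lock(), instead of queuing "I'm idle/busy" messages for the main thread to apply. The %state hash, get_next_job() and process() are stand-ins, not names from your code:

    use threads;
    use threads::shared;

    my %state :shared;    ## tid => 'idle' | 'busy' | 'done'

    sub worker {
        my $tid = threads->tid;
        { lock %state; $state{ $tid } = 'idle'; }

        while( my $job = get_next_job() ) {           ## however work arrives
            { lock %state; $state{ $tid } = 'busy'; }
            process( $job );
            { lock %state; $state{ $tid } = 'idle'; }
        }

        { lock %state; $state{ $tid } = 'done'; }     ## main thread just reaps
    }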
The normal method with a pre-forking server is to start a new thread if there is no idle thread available to field a new connection. I see what you are aiming for with your low-water mark mechanism--always having at least two idle threads available--but I haven't managed to push your code sufficiently hard to create the situation where that comes into play. And I am dubious about the way you have it coded.
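For comparison, the pool-management half of the main loop need not amount to much more than this -- count the idle workers (using the %state hash from the previous sketch) and top the pool back up to the low-water mark. LOWWATER and the one-second poll are invented numbers:

    use constant LOWWATER => 2;

    while( 1 ) {
        my $idle;
        { lock %state; $idle = grep { $_ eq 'idle' } values %state; }
        threads->create( \&worker )->detach while $idle++ < LOWWATER;
        sleep 1;    ## or block on a semaphore rather than polling
    }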
Each connection is so brief that it's pretty much impossible to test multiple concurrent connections when creating them manually via a browser. It would require some kind of automated client set up to drive it hard enough to test that scenario. But given all the other problems that need fixing, plus the open architectural questions, it is not worth the effort of testing that as things stand.
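When you do get to that point, a throwaway client along these lines is usually enough to generate some concurrency -- N threads each making M requests. The host, port, and counts are just placeholders:

    use strict;
    use warnings;
    use threads;
    use IO::Socket::INET;

    my( $host, $port, $nthreads, $reqs ) = ( 'localhost', 8080, 20, 50 );

    my @clients = map {
        threads->create( sub {
            for ( 1 .. $reqs ) {
                my $s = IO::Socket::INET->new( "$host:$port" ) or next;
                print { $s } "GET / HTTP/1.0\r\n\r\n";
                local $/;                  ## slurp the whole response
                my $response = <$s>;
                close $s;
            }
        } );
    } 1 .. $nthreads;

    $_->join for @clients;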
So, the ball's in your court to decide how you are going to proceed from here.
I may get time to throw something together to explore the multiple-listeners question sometime, but the rest will have to wait until you decide how you are going to proceed.