renodino has asked for the wisdom of the Perl Monks concerning the following question:

I'm writing a complex threaded server app that listens on a TCP socket (think of a web server), on WinXP with ActiveState Perl 5.8.6, using IO::Socket::INET and IO::Handle.

My initial design used a central ConnectionFactory threaded object to listen()/accept(), then pass the fileno of the accepted socket over a queue to a free worker thread, which would reconstitute it via IO::Handle::fdopen() and service the client request. While it works in single-threaded mode (i.e., no threads), it fails in multithreaded mode during the fdopen() with a "Bad file descriptor" error. Note that the accept()ing thread and the worker threads are "segregated", i.e., the workers do not descend directly from the listener, but are siblings of it.
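
For reference, a stripped-down sketch of that original design (not my real code; I've assumed a plain Thread::Queue for the hand-off, and the worker count is arbitrary):

use strict;
use warnings;
use threads;
use Thread::Queue;
use IO::Socket::INET;

my $work_q = Thread::Queue->new();

# workers are spawned first, so they are siblings of the acceptor, not children of it
my @workers  = map { threads->create(\&worker, $work_q) } 1 .. 4;
my $acceptor = threads->create(\&acceptor, $work_q);

sub acceptor {
    my $q = shift;
    my $listener = IO::Socket::INET->new(
        LocalPort => 9088,
        Proto     => 'tcp',
        Listen    => 10) or die "Can't open listener: $!";

    while (my $client = $listener->accept()) {
        # only the descriptor number crosses the queue; the handle itself stays behind
        $q->enqueue($client->fileno());
        # $client falls out of scope at the end of each iteration,
        # which turns out to matter (see the replies below)
    }
}

sub worker {
    my $q = shift;
    while (defined(my $fn = $q->dequeue())) {
        my $sock = IO::Socket::INET->new();
        unless ($sock->fdopen($fn, '+<')) {
            warn "fdopen($fn): $!";    # this is the call that fails in multithreaded mode
            next;
        }
        # ... service the client request, then close ...
        $sock->close();
    }
}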

I've worked around the issue thus far by changing the design (a rough sketch follows the list):

  1. spawn the worker threads from the acceptor thread after it has created the listen socket
  2. pass the fileno of the listen socket to the worker threads
  3. have the worker threads reconstitute the listen socket via fdopen()
  4. have the listener thread use IO::Select on the listen socket to determine when a connection request has arrived, and then
  5. notify a worker thread, which then calls accept()
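
A rough sketch of that arrangement (the worker count and the two notification queues are my own illustration, not the actual code):

use strict;
use warnings;
use threads;
use Thread::Queue;
use IO::Socket::INET;
use IO::Select;

my $notify_q = Thread::Queue->new();   # acceptor -> workers: "a connection is waiting"
my $done_q   = Thread::Queue->new();   # workers -> acceptor: "I've accept()ed it"

# step 1: the acceptor creates the listen socket first
my $listener = IO::Socket::INET->new(
    LocalPort => 9088,
    Proto     => 'tcp',
    Listen    => 10) or die "Can't open listener: $!";

# steps 1-3: spawn the workers afterwards and hand each one the listen socket's fileno
my @workers = map {
    threads->create(\&worker, $listener->fileno(), $notify_q, $done_q)
} 1 .. 4;

# steps 4-5: watch the listen socket; when a connection arrives, wake exactly one worker
my $sel = IO::Select->new($listener);
while (1) {
    next unless $sel->can_read(1);
    $notify_q->enqueue(1);
    $done_q->dequeue();    # wait until that worker has called accept() before select()ing again
}

sub worker {
    my ($listen_fn, $nq, $dq) = @_;

    # reconstitute the listen socket from its descriptor number
    my $listen = IO::Socket::INET->new();
    $listen->fdopen($listen_fn, '+<') or die "fdopen($listen_fn): $!";

    while (defined($nq->dequeue())) {
        my $client = $listen->accept();
        $dq->enqueue(1);               # let the acceptor resume watching the socket
        next unless $client;
        # ... service the request ...
        $client->close();
    }
}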

While this works for small numbers of connections, I'm concerned it may not handle large numbers (>100) of connections. I've considered letting all the worker threads select() on a nonblocking listen socket and allowing "nature to take its course", but I'd much prefer centralized management of load balancing across threads if possible, rather than relying on chaos.
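
By "centralized" I mean something along these lines (hypothetical; one queue per worker, with the dispatcher picking the least-loaded queue):

use strict;
use warnings;
use threads;
use Thread::Queue;

# one queue per worker; the dispatcher hands each new fileno
# to whichever worker currently has the least work queued
my @queues  = map { Thread::Queue->new() } 1 .. 4;
my @workers = map { threads->create(\&worker, $_) } @queues;

sub dispatch {
    my ($client_fileno) = @_;
    # pending() reports how many items a queue still holds
    my ($least_busy) = sort { $a->pending() <=> $b->pending() } @queues;
    $least_busy->enqueue($client_fileno);
}

sub worker {
    my $q = shift;
    while (defined(my $fn = $q->dequeue())) {
        # ... fdopen($fn), service the request, close ...
    }
}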

Is this inability to reconstitute file handles across segregated threads expected behavior? I could understand it if this were a process-based model, but shouldn't threads be more flexible? Or is this just another Win32ism?

Update: My bad. I've found/fixed the problem. Corrected code included. Guess I'm too dense to know how to use the modules I write...

My problem was (as BrowserUk suggested) that I lost the original filehandle to garbage collection, because I wasn't using the right API call to send it over to the other thread (TQD::enqueue() doesn't block, but TQD::enqueue_and_wait() does).

Here's the corrected code that solves the problem:

use IO::Socket;
use threads;
use threads::shared;
use Thread::Queue::Duplex;
use strict;
use warnings;
#
#   create a TQD
#   create 2 threads
#   wait for them
#
my $tqd = Thread::Queue::Duplex->new(ListenerRequired => 1);
my $thrdA = threads->new(\&threadA, $tqd);
my $thrdB = threads->new(\&threadB, $tqd);
$thrdA->join();
$thrdB->join();
print "Done.\n";

sub threadA {
    my $tqd = shift;
    $tqd->wait_for_listener();    # block until threadB has registered via listen()
    #
    #   open listen socket
    #   pass fileno to other thread
    #
    my $listenfd = IO::Socket::INET->new(
        LocalPort => 9088,
        Proto     => 'tcp',
        Listen    => 10);
    die "Can't open listener: $!" unless $listenfd;
    #
    #   now accept a connection
    #
    my $fd = $listenfd->accept();
    #
    #   !!!WRONG!!! it's enqueue_and_wait() that blocks!!!
    #
    #   my $resp = $tqd->enqueue('newfd', $fd->fileno());
    #
    # enqueue_and_wait() blocks until threadB respond()s, so $fd stays in scope
    # (and the descriptor stays open) until the other thread has reconstituted it
    my $resp = $tqd->enqueue_and_wait('newfd', $fd->fileno());
    $listenfd->close();
    return 1;
}

sub threadB {
    my $tqd = shift;
    $tqd->listen();    # register as a listener on the queue
    my $req = $tqd->dequeue();
    my $id = shift @$req;
    my $fn = $req->[1];
    print "fileno is $fn\n";
    my $fd = IO::Socket::INET->new();
    die "Can't acquire the socket: $!" unless $fd->fdopen($fn, '+>');
    my $tpage = '<html><body>
<i><h1>Got your click!!!</h1></i>
</body></html>';
    my $pglen = length($tpage);
    my $opage = "HTTP/1.0 200 OK
Content-type: text/html
Content-length: $pglen

$tpage
";
    $fd->send($opage, 0);
    $fd->close();
    $tqd->respond('newfd', 1);    # unblock threadA's enqueue_and_wait()
    return 1;
}

Re: Passing sockets between segregated threads
by BrowserUk (Patriarch) on Oct 20, 2005 at 16:43 UTC
    ... it fails in multithreaded mode during the fdopen() with a "Bad file descriptor" error.

    The likely cause is that you are allowing the original file handle to go out of scope and get garbage collected (closed) before the thread to which you pass the fileno has a chance to dup it.

    One way to tackle this is to save a copy of the file handle, indexed by its fileno, in a hash within the accept thread, and to have a separate queue onto which the responder threads post the filenos once they close them. The accept thread monitors that queue and, when a fileno appears on it, deletes the corresponding key from the socket cache to complete the cleanup.
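
    In outline (a sketch only; the queue names and the worker loop are illustrative, and thread creation is omitted for brevity):

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;
    use IO::Socket::INET;

    my $work_q    = Thread::Queue->new();   # acceptor -> workers: filenos to service
    my $cleanup_q = Thread::Queue->new();   # workers -> acceptor: filenos now closed

    sub acceptor {
        my $listener = IO::Socket::INET->new(
            LocalPort => 9088, Proto => 'tcp', Listen => 10,
        ) or die "Can't open listener: $!";

        my %cache;    # keeps the original handles alive until a worker is done with them
        while (my $client = $listener->accept()) {
            my $fn = $client->fileno();
            $cache{$fn} = $client;           # stop GC from closing the descriptor early
            $work_q->enqueue($fn);

            # drain the cleanup queue: forget handles the workers have already closed
            while (defined(my $done = $cleanup_q->dequeue_nb())) {
                delete $cache{$done};
            }
        }
    }

    sub worker {
        while (defined(my $fn = $work_q->dequeue())) {
            my $sock = IO::Socket::INET->new();
            unless ($sock->fdopen($fn, '+<')) {
                warn "fdopen($fn): $!";
                next;
            }
            # ... service the request ...
            $sock->close();
            $cleanup_q->enqueue($fn);        # tell the acceptor it can drop its copy
        }
    }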


      The filehandle isn't going out of scope. If you look at the code snippet, you'll see that threadA() enqueue()s the fileno to threadB...and Thread::Queue::Duplex::enqueue() is a blocking operation until threadB respond()s...which it doesn't do until it has dequeue()'d and reconstituted the file handle.

        Sorry. I didn't notice you were using T::Q::Duplex, and assumed T::Q.

        However, I recently had exactly the same symptom, a "Bad file descriptor" error, when trying to dup IO::Socket::INET handles in peer threads, and I cured it, reliably, with the technique I described.

        I'm not familiar with T::Q::Duplex, but I would suggest looking closely at the timing of your code. Add a few trace statements with Time::HiRes timestamps, or simply push a copy of the file handle returned from the accept onto a package-scope array and comment out the close. See if it makes a difference.
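
        Something as simple as this would do (a hypothetical trace helper, just to show the idea):

        use threads;
        use Time::HiRes qw(time);

        our @keep_alive;   # package-scope parking lot so GC can't close the handles

        sub trace {
            printf STDERR "[%.6f] [tid %d] %s\n", time(), threads->tid(), join('', @_);
        }

        # in the accept loop:
        #   my $client = $listenfd->accept();
        #   trace('accepted fileno ', $client->fileno());
        #   push @keep_alive, $client;    # keep the handle alive while diagnosing
        #   # $client->close();           # commented out for the test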

