exaethier has asked for the wisdom of the Perl Monks concerning the following question:

So the scenario here involves a multi-threaded HTTP server. At a high level, I accept() socket connections in the main thread (via HTTP::Daemon) and pass the fileno of the returned handle to a thread in a worker pool. I use the fileno because Perl (or perhaps ActiveState's Perl) is unable to pass the file handle object itself between threads. In the child thread I then re-open a handle on that fileno. However, as soon as the main thread moves on to the next connection/request (and the file handle drops out of scope), the socket is closed, even though the worker thread is still using it. I have attempted to re-bless the file handle into UNIVERSAL to prevent its DESTROY method from firing, to no avail. Does anyone know how I might prevent this close-on-destroy behavior from triggering? Pseudo-code follows:

sub mainThread
{
	my $d = HTTP::Daemon->new();

	# $c is a descendant of IO::Handle
	while ( my $c = $d->accept )
	{

		# pass fileno($c) to a thread-shared queue where it will be picked
		# up by a worker thread

		# at the close of this loop $c drops out of scope and the socket is
		# closed, even though workerThread is still using it
	}
}

sub workerThread
{
	my $fileno = shift;

	open my $c, '+<&=', $fileno
		or die "cannot re-open fd $fileno: $!";

	# generate a response and write to $c

	close $c;
}

The only solution I have found involves maintaining a list of handles in the main thread and dropping them after the worker thread indicates it is done; however, there must be a cleaner way to do this.

Thanks in advance for your help!

Re: Preventing IO::Handles from closing on destruction
by Eliya (Vicar) on Nov 04, 2011 at 15:07 UTC
    The only solution I have found involves maintaining a list of handles in the main thread and dropping them after the worker thread indicates it is done

    I think this is exactly the way to go if you need to pass file descriptor numbers around for some reason.  If you don't want $c to go out of scope, just keep it around for as long as needed.
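    For instance, a minimal sketch of that registry approach (the sub names and the file-based demonstration are illustrative, not from the thread; in the real server the worker would report the finished fileno back over a "done" queue):

```perl
use strict;
use warnings;
use IO::Handle;
use File::Temp qw(tempfile);

my %live;    # fileno => handle; the extra reference pins each handle open

# Call in the main thread right after accept(): pin the handle so its
# DESTROY cannot fire when the loop's lexical goes out of scope.
sub pin_handle {
    my ($fh) = @_;
    my $fd = fileno($fh);
    $live{$fd} = $fh;
    return $fd;
}

# Call once the worker signals that it is finished with this fileno.
sub release_handle {
    my ($fd) = @_;
    delete $live{$fd};    # last reference dropped, the handle closes now
}

# Demonstration with a plain file standing in for the socket:
my ($tmp) = tempfile();
print {$tmp} "hello\n";
$tmp->flush;
my $fd = pin_handle($tmp);
undef $tmp;                    # the lexical is gone ...
open my $dup, '<&=', $fd       # ... but the descriptor survives, because
    or die "fd $fd was closed: $!";    # %live still references the handle
```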

      Fair enough. I tried implementing this and hit an unexpected problem: the close() call in the worker thread does not actually close the socket. The connection does not close until the preserved file handle in the main thread goes out of scope. Unfortunately, that happens only after the next connection comes in, as the main thread is blocked in accept() ... no non-blocking calls on Win32, AFAIK. Any thoughts on how to force the duplicated socket to close?
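      That is consistent with ordinary descriptor-duplication semantics: the OS keeps the underlying socket or file open until *every* duplicated descriptor is closed, so closing the worker's copy alone is not enough. A small sketch of the semantics, using POSIX::dup and a temp file standing in for the socket:

```perl
use strict;
use warnings;
use POSIX ();
use IO::Handle;
use File::Temp qw(tempfile);

# A temp file stands in for the socket; descriptor semantics are identical.
my ($fh) = tempfile();
print {$fh} "payload\n";
$fh->flush;

my $fd  = fileno($fh);
my $dup = POSIX::dup($fd);    # a second OS descriptor for the same open file

close $fh;                    # closes descriptor $fd only ...
open my $survivor, '<&=', $dup
    or die "duplicate died with the original: $!";
seek $survivor, 0, 0;
my $line = <$survivor>;       # ... the underlying file is still open
```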

        On Unix, I would try POSIX::close($fileno), which operates directly on descriptor numbers (not handles).  Not sure if that works on Windows, though, or what the equivalent would be...
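        A sketch of that worker-side cleanup on Unix (the sub name is illustrative): POSIX::close() operates on the raw descriptor number, below the PerlIO layer, so the OS connection is torn down even though the main thread still holds a handle object.

```perl
use strict;
use warnings;
use POSIX ();

# Hypothetical worker-side cleanup: close the descriptor number directly.
# POSIX::close returns undef on failure.
sub finish_connection {
    my ($fileno) = @_;
    defined POSIX::close($fileno)
        or warn "POSIX::close($fileno) failed: $!";
}
```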