jgstratton has asked for the wisdom of the Perl Monks concerning the following question:

It seems I am losing my file descriptors when using IO::Async::Function in fork mode. This doesn't happen in thread mode (which I cannot use here). The only workaround I can think of is resetting and re-initializing my logger in each worker, which I assume is wrong.

Here is basically what the code is doing:

    use strict;
    use warnings;
    use IO::Async::Loop;
    use IO::Async::Function;
    use Log::Log4perl;

    unless ( Log::Log4perl->initialized() ) {
        Log::Log4perl::init("~/blah.cfg");
    }
    my $logger = Log::Log4perl->get_logger('main.log1');
    $logger->info("$$: Starting");

    my $loop     = IO::Async::Loop->new;
    my $function = IO::Async::Function->new(
        code => sub {
            my ($in_num) = @_;
            my $logThing = LogThing->new();
            $logThing->do_it($in_num);
            return $in_num + 1;
        },
    );
    $loop->add($function);

    my @futures;
    foreach my $num ( 2 .. 3 ) {
        push @futures, $function->call( args => [$num] );
    }
    $loop->await_all(@futures);
    $function->stop;
    $logger->info("$$: Exiting");
    0;

    package LogThing;
    use Moose;
    use strict;
    use warnings;
    use Log::Log4perl;
    use Try::Tiny;

    sub do_it {
        my ( $self, $in_num ) = @_;
        my $logger = Log::Log4perl->get_logger('main.trans');
        print "Sleeping $in_num\n";
        sleep($in_num);
        try {
            $logger->info("$$: Done $in_num");
        }
        catch {
            warn "Some weird log4perl object conflict $_";
        };
    }
    0;

When executing, I get a "Bad file descriptor" error:

Sleeping 2
Sleeping 3
Some weird log4perl object conflict Cannot write to '/var/log/blah/trans.log': Bad file descriptor at /usr/lib/perl5/site_perl/5.18.2/Log/Log4perl/Appender/File.pm line 267.
Some weird log4perl object conflict Cannot write to '/var/log/blah/trans.log': Bad file descriptor at /usr/lib/perl5/site_perl/5.18.2/Log/Log4perl/Appender/File.pm line 267.

Replies are listed 'Best First'.
Re: “Bad file descriptor” using Log4Perl and IO::Async
by TimTom0123 (Initiate) on Aug 03, 2020 at 13:55 UTC

    We just ran into this 'bug' as well. I assume the original poster has already moved on with his life, but for posterity: it comes down to a simple difference in how Perl and IO::Async handle open file descriptors across a fork. Perl keeps them open by default, while IO::Async opts for more isolation and closes every open filehandle that is not explicitly enumerated to keep.

    What this means is that every IO::Async routine that forks (or at least every one I know of) accepts an additional setup argument through which you can pass actions for filehandles you don't want closed. If only particular files need to stay open, you can do something more targeted, but if all you want is to make the errors go away and get on with your life, the snippet is pretty simple: keep every currently open descriptor. If you take this brute-force approach, build the list near the initial setup of your logger, or at least before you start creating channels, and try not to have any other files open at that point. Otherwise you're liable to keep descriptors alive in the child, including IO::Async's own Channels, that you probably don't want to mess with. Also, if your forked code ever calls exit, follow the advice in perlfork and use _exit from POSIX instead (assuming you are on Linux).

    # Skip the builtins (fds 0-2); they are always kept, and while it
    # doesn't hurt to list them, they are already handled.
    my @fds = grep { $_ > 2 } IO::Async::OS->potentially_open_fds();

    # Keep them all open. The parens make the parser trust that map
    # was given a code block and not a hash reference.
    my @setup = map { ( "fd$_" => 'keep' ) } @fds;

    # ... later ...
    my $thing = IO::Async::Whatever->new(
        code  => sub { ... },
        setup => \@setup,
    );
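
    For the OP's specific case, the same trick should plug straight into IO::Async::Function, whose setup argument is forwarded to the underlying worker process. An untested sketch (the fd list is built exactly as above; the worker body is abbreviated):

    use IO::Async::Loop;
    use IO::Async::Function;
    use IO::Async::OS;

    # Build the keep-list before creating the Function, while only the
    # logger's files (besides fds 0-2) are open.
    my @setup = map { ( "fd$_" => 'keep' ) }
                grep { $_ > 2 } IO::Async::OS->potentially_open_fds();

    my $loop     = IO::Async::Loop->new;
    my $function = IO::Async::Function->new(
        code => sub {
            my ($in_num) = @_;
            # ... log via Log::Log4perl as in the original post ...
            return $in_num + 1;
        },
        setup => \@setup,
    );
    $loop->add($function);

    With the logger's descriptors kept, the forked workers can write to the already-open appender instead of hitting "Bad file descriptor".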