PerlMonks  

Sharing a filehandle with an asynchronous event

by stevieb (Canon)
on Jan 23, 2018 at 15:52 UTC [id://1207765]

stevieb has asked for the wisdom of the Perl Monks concerning the following question:

Esteemed Monks and friends...

I'm just putting the finishing touches on a rather large and complex project, and I've noticed an issue with my logging mechanism. To simplify for clarity: I share an object that contains a logging object, which holds an internal file handle open for writing. I then spin off an async event that runs every X seconds. This event receives a copy of the object that contains the log object.

All parts of the application write to the log file just fine, except for the code that runs within the event. Here is a *very* dumbed down example of what I mean:

use warnings;
use strict;

use Async::Event::Interval;
use Logging::Simple;

my $log = Logging::Simple->new(
    file       => 'test.log',
    write_mode => 'w'
);

my $count = 1;

my $e = Async::Event::Interval->new(
    1,
    sub {
        print "running poll $count\n";
        $log->_0("running poll $count");  # double quotes so $count interpolates
        $count++;
    }
);

$e->start;

for (0..3){
    $log->_0('logging in main');
    sleep 1;
}

$e->stop;

What I need is for all $log->_0() entries to go into the specified file, but the log file ends up looking like this, with only the main log entries making it to the file:

[2018-01-23 07:39:04.604][lvl 0] logging in main
[2018-01-23 07:39:05.605][lvl 0] logging in main
[2018-01-23 07:39:06.605][lvl 0] logging in main
[2018-01-23 07:39:07.605][lvl 0] logging in main

Async::Event::Interval is a distribution that simply runs a specified subroutine every X seconds (in this case, 1). It's exceptionally basic, and I wrote it for a single purpose (well, that, and to learn). In the above case, nothing is passed in; the event simply uses the $log object from the file scope itself.

Is there a way to share a handle like this? If not, can anyone recommend the proper way to do these things (I won't object to using a different async-event type distribution if necessary)?

Thanks,

-stevieb

Replies are listed 'Best First'.
Re: Sharing a filehandle with an asynchronous event
by thanos1983 (Parson) on Jan 23, 2018 at 16:40 UTC

    Hello stevieb,

    I am not sure whether you want to use only your own module, stevieb9/logging-simple, or whether any logging module, e.g. Log::Log4perl, would do.

    I put together a simple example using the default code from the Async::Event::Interval documentation and the info that you want to log.

    Sample code below:
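    (The code sample itself did not survive in this copy of the thread. Below is a guess at what it looked like, based on Log::Log4perl's :easy mode and the Async::Event::Interval SYNOPSIS; the file name test.log and the one-second interval are assumptions, not the original reply's code.)

```perl
use warnings;
use strict;

use Async::Event::Interval;
use Log::Log4perl qw(:easy);

# Append mode ('>>') plus Log4perl's default autoflush lets the forked
# event process and the main process write to the same file.
Log::Log4perl->easy_init({ level => $DEBUG, file => '>>test.log' });

my $count = 1;

my $event = Async::Event::Interval->new(
    1,
    sub {
        DEBUG("running poll $count");
        $count++;
    },
);

$event->start;

for (0 .. 3) {
    DEBUG('logging in main');
    sleep 1;
}

$event->stop;
```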

    If this is not what you are looking for, let me know what I have misunderstood and I will try to come up with something else.

    Hope this helps, BR.

    Seeking for Perl wisdom...on the process of learning...not there...yet!

      This is actually great! To be honest, I've never used Log::Log4perl before, so this got me doing some testing and reading of the very informative docs. It does what is needed here, so I do believe I'm going to swap in this logging distribution in this case. There are even config file directives to eliminate race-type conditions (i.e. overlap).

      That said, I'm still wondering if there's a real, feasible way to "share" a file handle across procs (forks, really). I don't believe there is, but I'm still open to hearing input. Even at the C level, it doesn't appear trivial to guarantee consistently that writes never overlap.

      Update

      "I put together a simple example using the default code from the Async::Event::Interval documentation and the info that you want to log."

      That's awesome that you actually tested out the SYNOPSIS of the distribution :)

        I'm still wondering if there's a real, feasible way to "share" a file handle across procs (forks, really)

        It depends on the system you are running on. Under Linux you can share a file descriptor even between processes (forks). Open the file in append mode and use line-oriented buffering (flushing at each EOL), and the system will make sure that messages from different processes are written line by line. Lines from one process may land between lines from another, but no line will be corrupted.

        Of course, a file handle is not a file descriptor, so passing "file handles" around is a little more work, but it is still quite possible.
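        One way to get from a descriptor to a handle in Perl, sketched below: fileno gives the raw descriptor number, and the '&=' open modes wrap an existing descriptor in a new handle without duplicating it (the file name fd_demo.log is just for illustration).

```perl
use warnings;
use strict;

unlink 'fd_demo.log';                     # start fresh for the demo

# Open a file and grab its raw descriptor number, as another process
# might receive it after a fork or over an fd-passing channel.
open my $orig, '>>', 'fd_demo.log' or die "open: $!";
my $fd = fileno($orig);

# Adopt (not dup) the descriptor as a fresh Perl handle via '>>&='.
open my $adopted, '>>&=', $fd or die "fdopen: $!";
select((select($adopted), $| = 1)[0]);    # autoflush each print

print $adopted "written via adopted handle\n";
```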

        With multi-threaded processes things are similar. Even though there are internal mutexes (though I'm not sure about perl here), you still have to make sure that buffering is line-oriented and the file descriptor is in append mode.
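        A minimal core-Perl sketch of the append-mode approach described above (the file name shared.log is illustrative): autoflush makes each print a single write, and O_APPEND keeps each line intact even though lines from the two processes interleave unpredictably.

```perl
use warnings;
use strict;

unlink 'shared.log';                      # start fresh for the demo

# Open in append mode; O_APPEND makes every write land at end-of-file.
open my $fh, '>>', 'shared.log' or die "open: $!";
select((select($fh), $| = 1)[0]);         # autoflush: one write() per print

my $pid = fork;
die "fork: $!" unless defined $pid;

if ($pid == 0) {                          # child inherits the descriptor
    print $fh "child: line $_\n" for 1 .. 3;
    exit 0;
}

print $fh "parent: line $_\n" for 1 .. 3; # parent writes via the same fd
waitpid $pid, 0;
close $fh or die "close: $!";
```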

        Hello stevieb,

        I am glad that my proposed solution worked for you. To be honest, whenever I need to do some logging I use this module. It is powerful and flexible enough to do many, many things.

        Regarding: That said, I'm still wondering if there's a real, feasible way to "share" a file handle across procs (forks, really). I was reading about the AnyEvent module yesterday; maybe it can do what you are asking for. Unfortunately I did not spend time experimenting with it, but maybe you can give it a try.

        Nevertheless, if you manage to resolve your problem, update us as well, so we have a reference for a possible similar case in the future :)

        BR / Thanos

        Seeking for Perl wisdom...on the process of learning...not there...yet!
Re: Sharing a filehandle with an asynchronous event
by stevieb (Canon) on Jan 23, 2018 at 16:04 UTC

    Here's an even simpler example that uses a file handle directly, as opposed to having it hidden inside the log object:

    use warnings;
    use strict;

    use Async::Event::Interval;

    open my $fh, '>', 'test.log' or die $!;

    my $count = 1;

    my $e = Async::Event::Interval->new(
        1,
        sub {
            print $fh "in event: $count\n";
            $count++;
        }
    );

    $e->start;

    for (0..3){
        print $fh "write to file in main\n";
        sleep 1;
    }

    $e->stop;
Re: Sharing a filehandle with an asynchronous event
by Anonymous Monk on Jan 24, 2018 at 15:10 UTC
    If it were me, I think I would just have each process open the file name in shared read/write mode and call it good enough. Of course, rows will be added to the file in unpredictable order. I'm not sure that adding complexity is really buying you much.

Node Type: perlquestion [id://1207765]
Approved by marto
Front-paged by Corion