http://qs1969.pair.com?node_id=738893

Eyck has asked for the wisdom of the Perl Monks concerning the following question:

I'm watching for events in a directory, and I seem to be missing something fundamental (and/or trivial). Here's how I do it:
while ($keepOnWatching) {
    $inotify->watch($watchpoint, IN_ALL_EVENTS);
    @events = $inotify->read;    # sleep, and wake when events arrive
    # undef $inotify
    processEvents(@events);
}
This works nicely, except that it misses events that arrive while I'm in 'processEvents'. The obvious fix would be this:
$inotify->watch($watchpoint, IN_ALL_EVENTS);
while ($keepOnWatching) {
    @events = $inotify->read;    # sleep, and wake when events arrive
    processEvents(@events);
}
but this will burn 100% CPU, because some of the actions in processEvents are themselves caught by the inotify watch, so the loop keeps feeding itself. I can't seem to find a way out of this conundrum. It looks similar to the problem a speakerphone solves (filtering out your own echo); I just don't know how to approach it.

Replies are listed 'Best First'.
Re: How to wait for events, and not lose any, while processing them ?
by jfroebe (Parson) on Jan 26, 2009 at 10:42 UTC
Re: How to wait for events, and not lose any, while processing them ?
by BrowserUk (Patriarch) on Jan 26, 2009 at 15:57 UTC

    Try it this way (untested):

    use threads;
    use Thread::Queue;

    sub processEvents {
        my $Q = shift;
        while ( my $event = $Q->dequeue ) {
            ## Process events
        }
    }

    my $Q = Thread::Queue->new;
    my $thread = threads->create( \&processEvents, $Q );

    $inotify->watch($watchpoint, IN_ALL_EVENTS);
    while ($keepOnWatching) {
        $Q->enqueue( $inotify->read );
    }

    All the threading overhead is up front; the only work left in the 'event loop' is pushing the returned values onto the shared queue, which is barely slower than assigning them to your array.

    If you find that the events are arriving faster than the thread can process them--and if you have multiple cores--start a second, third, fourth thread.
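
    A rough (untested) sketch of that scaling step, assuming the same Thread::Queue is simply shared by several workers; the worker count and the undef shutdown sentinel below are illustrative, not anything the module requires:

    use threads;
    use Thread::Queue;

    my $Q = Thread::Queue->new;

    sub processEvents {
        my $Q = shift;
        while ( my $event = $Q->dequeue ) {
            ## Process one event
        }
    }

    # Start as many workers as you have spare cores; they all drain the same queue.
    my $WORKERS = 4;                                  # illustrative value
    my @workers = map { threads->create( \&processEvents, $Q ) } 1 .. $WORKERS;

    # ... the main loop keeps enqueueing $inotify->read results exactly as above ...

    # At shutdown: one undef per worker ends its dequeue loop, then reap them.
    $Q->enqueue( undef ) for @workers;
    $_->join for @workers;

    Note that with more than one worker you give up ordering between events, so if events for the same file must be processed in sequence, keep a single worker or partition events by path.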

    If this were Win32, I'd boost the priority of the main thread to real-time to ensure that it was favoured by the scheduler in the event that both it and one of the processing threads became eligible to run at the same time. But I don't know how to do that on *nix.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      This is bang-on, and takes me back to writing word processing code in assembler back in the late 70s.

      When an interrupt happens, the processor suspends whatever it's doing; it stuffs all of the register values, including the program counter, onto a stack, then calls the appropriate interrupt service routine. When that (very slim) routine is finished, it executes not a normal RET (subroutine return), but an RTI (return from interrupt) which pulls not only the program counter from the stack, but the values of all the registers. Thus, as far as the original code knows, nothing happened (OK, this depends on the processor architecture, but you get the idea).

      Even cooler -- while an interrupt is being processed, that level of interrupt and everything below it is disabled, which means any interrupt that happens during the routine is ignored until the RTI is executed. If a higher level interrupt occurs, the obvious happens -- registers are pushed, and the processor again jumps to another interrupt service routine.

      So, when an event occurs, you want to have the shortest (fastest) possible interrupt service routine handle the event, so that the processor can get back to whatever it was doing, such as drawing text on the screen, processing keyboard input, or waiting in an idle loop, without dropping any data. The idle loop runs around and waits for things to magically appear in the queues, via the interrupt service routines. It can then do the laborious process of figuring out what complicated processing needs to be done with the keystroke, without the worry that another event is going to come along and perhaps be dropped.

      Problems occur when the interrupts come too thick and fast for even the slender interrupt service routines to deal with -- or when the code that handles the incoming data can't empty the queues fast enough.

      Fun stuff, and good to remember in this context.

      Alex / talexb / Toronto

      "Groklaw is the open-source mentality applied to legal research" ~ Linus Torvalds

        That is not exactly the problem here; imagine, if you will, that your interrupt-handling routine raises interrupts itself.

        Now you have the same problem I have - the 'disable interrupts' portion is similar to the first version of my code - I stop listening for 'interrupts' while processing. The problem is that, in this case, any interrupts that arrive meanwhile are simply ignored.

      This is a great, textbook example of handling events with an additional thread, thanks.

      The problem, however, is that 'processEvents' CAUSES new events to be generated. Thus, without an echo-filter like

      while ($keepOnWatching) {
          if (ThisIsNotMyOwnEcho()) {    # echo-filter
              processEvents();
          }
      }

      this solution turns a program burning one CPU into a program burning multiple cores.

        If you only describe half the problem (or less: how do you distinguish an "echo" from the "real thing"?), you only get half a solution. That said, the architecture stays the same.

        You either filter events before you queue them, or after you dequeue them, whichever is more convenient and timely.
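
        Here is a rough (untested) sketch of the "filter before you queue" variant, assuming the worker records the paths it is about to write in a shared hash so the watcher can skip the resulting events. %selfInduced, the '.processed' output file, and the watch path are illustrative names, not anything Linux::Inotify2 provides:

        use threads;
        use threads::shared;
        use Thread::Queue;
        use Linux::Inotify2;

        my %selfInduced :shared;                     # paths the worker is about to touch itself
        my $Q = Thread::Queue->new;
        my $watchpoint     = '/tmp/watched';         # illustrative
        my $keepOnWatching = 1;

        sub processEvents {
            while ( my $path = $Q->dequeue ) {
                my $outfile = "$path.processed";     # hypothetical side effect of processing
                { lock(%selfInduced); $selfInduced{$outfile} = 1; }
                # ... do the real work, e.g. write $outfile ...
            }
        }

        my $worker = threads->create( \&processEvents );

        my $inotify = Linux::Inotify2->new            or die "inotify: $!";
        $inotify->watch( $watchpoint, IN_ALL_EVENTS ) or die "watch: $!";

        while ($keepOnWatching) {
            for my $event ( $inotify->read ) {       # blocks until events arrive
                my $f = $event->fullname;
                lock(%selfInduced);
                next if delete $selfInduced{$f};     # echo-filter: skip one event for a path we marked
                $Q->enqueue($f);
            }
        }

        Filtering after you dequeue instead just moves the check into processEvents; the queue then carries a little extra traffic, but the watcher loop stays as small as possible.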


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
Re: How to wait for events, and not lose any, while processing them ?
by cdarke (Prior) on Jan 26, 2009 at 10:47 UTC
    It is a "feature" of inotify on Linux that if you do not read events fast enough, you will miss them (certain Windows interfaces suffer from the same problem).

    I suggest that you process the events in a different thread. You might also want to check out Linux::Inotify2.
      The code above starts with use Linux::Inotify2; the problem is with processing the events. I don't quite get how putting up another thread solves the issue: you still need to pause to push the data to the other thread.
        Eyck:

        Frequently, event handlers are written in a callback style and do as little work as possible. (For example, in interrupt handlers, interrupts may be disabled during the handler body, preventing other interrupt-driven events from being noticed.) So typically, you store the request with as little processing as possible, to allow the event system to get back to its job of collecting events.

        Then your other thread can pull events off the queue and process them. That way, if you have a rapid flurry of events, they'll stack up in the queue. If you tried handling them in the event handler, the rapid flurry of events could be lost.

        Please note that this is a "hand wavy" explanation because there are many similar systems in which the implementation details are different, and I don't know anything about the Linux::Inotify2 package, so I can't comment on any specifics.
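
        Still, a minimal (untested) sketch of that shape, assuming Linux::Inotify2's optional callback form of watch() (the path and the "heavy lifting" are placeholders), might look like this:

        use threads;
        use Thread::Queue;
        use Linux::Inotify2;

        my $Q = Thread::Queue->new;

        # Worker thread: all of the slow processing happens here.
        my $worker = threads->create( sub {
            while ( my $path = $Q->dequeue ) {
                # ... heavy lifting for one event ...
            }
        } );

        my $inotify = Linux::Inotify2->new or die "inotify: $!";

        # The callback plays the "interrupt service routine": enqueue and return at once.
        $inotify->watch( '/tmp/watched', IN_ALL_EVENTS, sub {
            my $event = shift;
            $Q->enqueue( $event->fullname );
        } ) or die "watch: $!";

        $inotify->poll while 1;                      # block, dispatch callbacks, repeat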

        ...roboticus