in reply to Coro, AnyEvent and performance

What you want is for a kernel running outside of your application to forcefully take the CPU from you whenever it feels it necessary to do so. That is True Threading, which Coro is decidedly not.

The problem in your example code is that your loop body is very, very short: you're computing a simple exponential (OK, near the end it's not so simple). The time for Coro/AnyEvent to figure out whether there are any other events to process, process them, and come back to you quite simply overwhelms your loop.

If your loop was doing "real work" in the middle, you might find the overhead of poll/cede to be far less significant.
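For instance (a minimal sketch, assuming Coro is installed; the loop body and the cede interval are made up for illustration), ceding only every N iterations amortizes the scheduler cost over many passes of real work:

```perl
#! perl -slw
use strict;
use Coro;

# Hypothetical tight loop: instead of ceding on every pass, cede only
# every 10_000 iterations so the scheduler overhead is amortized.
my $total = 0;
async {
    for my $i ( 1 .. 1_000_000 ) {
        $total += $i;             # the (trivial) per-iteration work
        cede unless $i % 10_000;  # occasionally let other coros run
    }
};

cede while Coro::nready;          # drive the background coro to completion
print $total;                     # 500000500000
```

Tuning that interval is a trade-off between responsiveness to events and raw loop throughput.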

For example, if your loop was using, say, AnyEvent::HTTP to query a web site, you wouldn't even need to poll/cede. Or if your loop was rendering some Template Toolkit output, that may be heavy enough that the cost of ceding each time through the loop is minor.
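A minimal sketch of the AnyEvent::HTTP case (assuming the module is installed; the URL is just a placeholder): while the request is in flight, the event loop is free to dispatch timers and other watchers, with no explicit cede anywhere:

```perl
#! perl -slw
use strict;
use AnyEvent;
use AnyEvent::HTTP;

my $cv = AnyEvent->condvar;

# http_get is non-blocking: it registers the request and returns
# immediately; the callback fires from inside the event loop.
http_get "http://example.com/", sub {
    my ( $body, $hdr ) = @_;
    print "got status $hdr->{Status}, ", length( $body // '' ), " bytes";
    $cv->send;
};

$cv->recv;   # run the event loop until the reply (or an error) arrives
```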

The reality is, these things have overhead: they need to go check everything before coming back to where you are. As long as you can push your hard work off into asynchronous pieces of work that need to wait for things (such as sockets, e.g., TCP/IP requests, or user events), Coro will net you a speed-up. But if you're doing something computationally trivial between cedes, that's not going to seem like a benefit :-)

Replies are listed 'Best First'.
Re^2: Coro, AnyEvent and performance
by mcrose (Beadle) on Jul 06, 2011 at 17:38 UTC

    That's what I suspected was the case. So in a situation where I want to continually interleave a somewhat trivial computational task as a background thread (say, an order or two of magnitude less actual computational work) while staying responsive to intermittent events, I'd be better off just forking off the computational task, having the parent run the event system, and then passing the computational results back to the parent as an event via IPC?

      That all depends. If you don't need to share data at all, BrowserUk's solution may be simplest. If you need to share data, it becomes a bit more of a mess. Coro allows you to go on without worrying about simultaneous access to shared resources - you only need semaphoring for when you want to keep someone from modifying something while your thread may be ceded (your thread "blocks" on some asynchronous access). And this should be pretty rare. With full-blown threads, it can get more complex in a hurry - the more complex the data that you need to share, the more thought you need to provide. Not that it can't be worth it, but weighing the options is probably prudent.

      Update: "more thought you need to provide" includes things like semaphoring, not merely the act of sharing a hash. (Though I do wonder if the copying of the non-shared hash to the shared hash is atomic - is there a race condition there?)

        the more complex the data that you need to share, the more thought you need to provide.

        That just is not true. You need to share a hash full of data, 'tis easy:

```perl
#! perl -slw
use strict;
use bignum;
use threads;
use threads::shared;
use Data::Dump qw[ pp ];

my %results :shared;

async {
    my %comps;
    for my $i ( 1 .. 1e6 ) {
        $comps{ $i } = $i ** $i;
    }
    %results = %comps;
}->detach;

while( sleep 1 ) {
    print "timer event";
    last if %results;
}

pp %results;
```
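On the question of whether the copy into the shared hash is atomic: as I understand it, each element store into a shared hash is individually synchronized, but the list assignment as a whole is not one atomic operation, so a reader could in principle observe a partially filled %results. A hedged sketch of guarding the copy with an explicit lock (the sample data here is made up for illustration):

```perl
#! perl -slw
use strict;
use threads;
use threads::shared;

my %results :shared;

my %comps = map { $_ => $_ ** 2 } 1 .. 10;   # stand-in for the real work

# lock() holds the shared hash for the duration of the enclosing block,
# so no other thread sees %results half-populated; the lock is released
# automatically when the block exits.
{
    lock %results;
    %results = %comps;
}

print scalar keys %results;   # 10
```

A reader thread should take the same lock (or check a separate :shared "done" flag) before inspecting %results.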

        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.