in reply to Re^6: Why are people not using POE?
in thread Why are people not using POE?

You'll notice that I said "multi-(task|thread)ing" originally.

Yup, I did. I thought it sounded dumb considering that "cooperative multi-threading" is basically extinct and hardly applicable to high-performance servers.

BTW, how many requests per second does your "really high-performance" pre-forked server handle?

I don't know off the top of my head. But why ask me? Most of the websites on the internet today use Apache, and thus a pre-forked process model. Ask Amazon or Ticketmaster how they're doing under load, they're both Apache/mod_perl users last I heard.

-sam

Re^8: Why are people not using POE?
by dpuu (Chaplain) on Jun 11, 2005 at 21:10 UTC
    ...considering that "cooperative multi-threading" is basically extinct and hardly applicable to high-performance servers

    Thought I'd jump in here and point out that, while I can't speak for the world of servers, cooperative multithreading is alive and well in the world of simulation. There it is really important that the results are deterministic: the same stimuli should always give the same path through the code. I have become very frustrated over the years as support for cooperative multithreading libraries has slowly dwindled away. I have hopes that good continuation support in Parrot (and Perl 6) may stem the trend.
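    To make the determinism point concrete, here is a minimal sketch using the CPAN Coro module (one of several cooperative-threading options for Perl; the thread names and step counts are made up for illustration). Because every switch point is an explicit cede, the interleaving, and therefore the output, is identical on every run:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Coro;   # cooperative threads from CPAN

        my @log;
        my @threads = map {
            my $name = $_;
            async {
                for my $step (1 .. 3) {
                    push @log, "$name step $step";
                    cede;   # explicitly hand control to the next ready coroutine
                }
            };
        } qw(A B);

        $_->join for @threads;
        print "$_\n" for @log;   # always A/B strictly alternating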

    --Dave
    Opinions my own; statements of fact may be in error.

      Imagine ...

      if Parrot had both. You run those algorithms that need deterministic threading as user threads, and you run the user-thread dispatcher within a kernel thread. Your interface (GUI / CLI / browser) runs in a separate kernel thread.

      The simulation runs perfectly predictably, and the interface can both monitor its progress and remain responsive, allowing the simulation parameters to be adjusted in real time.
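      You can approximate that split today with plain Perl ithreads: a toy round-robin dispatcher of "user threads" (here just closures, with made-up task names) runs deterministically inside one kernel thread, while the main thread stays free to monitor progress through a queue. This is only a rough sketch of the idea, not anything Parrot-specific:

          #!/usr/bin/perl
          use strict;
          use warnings;
          use threads;
          use Thread::Queue;

          my $progress = Thread::Queue->new;

          # Kernel thread: a trivial round-robin dispatcher over cooperative
          # tasks (closures that return false once finished). Deterministic.
          my $worker = threads->create(sub {
              my @tasks = map {
                  my ($name, $step) = ($_, 0);
                  sub { $step++; $progress->enqueue("$name: step $step"); $step < 3 }
              } qw(A B);
              while (@tasks) {
                  @tasks = grep { $_->() } @tasks;   # one slice per task per pass
              }
              $progress->enqueue(undef);             # signal completion
          });

          # Main (interface) thread stays responsive; here it just reports.
          while (defined(my $msg = $progress->dequeue)) {
              print "monitor: $msg\n";
          }
          $worker->join;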

      Maybe I'm a dreamer...


Re^8: Why are people not using POE?
by kscaldef (Pilgrim) on Jun 11, 2005 at 22:15 UTC
    I don't know off the top of my head. But why ask me? Most of the websites on the internet today use Apache, and thus a pre-forked process model. Ask Amazon or Ticketmaster how they're doing under load, they're both Apache/mod_perl users last I heard.

    Amazon and Ticketmaster are great success stories for mod_perl, no question about it. But, from what I know of the systems, their individual servers are not what I would call "really high performance". Nor, for that matter, are most of the web sites on the internet (sort of by definition).

    To give some context, for me "really high performance" means the world from around 10,000 queries per second and up. Come to my talk at YAPC or OSCON this summer and I'll be happy to talk about some of the techniques we use to build systems like this.

    (To be really fair to Amazon and Ticketmaster, their applications are significantly more challenging to get crazy performance out of. Please don't take this as any criticism of them.)

      But, from what I know of the systems, their individual servers are not what I would call "really high performance".

      Your definition is therefore not worth much to me. I'll most likely never work on a system that has to stand up to more load than those boxes!

      Come to my talk at YAPC or OSCON this summer and I'll be happy to talk about some of the techniques we use to build systems like this.

      Do you really use a cooperative multi-threading system at Yahoo? I would be fascinated (and very surprised) to hear about that!

      -sam

        I was speaking of individual machines, not of the whole server farm. The total load served by Amazon or TM is quite impressive; however, they use a lot of machines to do it. From what I know (more about TM than Amazon), though, the query rate handled by a single machine is not huge.

        Yes, we really do use cooperative multi-blah systems at Yahoo, at least for some applications. Some of them more closely resemble a threaded programming model; others are more like event-based state machines.

        The most common place I can think of where you'd want a design like this is a service-oriented architecture, where the server that originally receives a user request does little more than analyze it and then make a number of sub-requests to other services. An event-driven state machine is generally the most efficient and scalable way to multiplex really large numbers of simultaneous connections. With the Apache pre-forked process model, each child is tied up for the whole time you are waiting for the sub-requests to return, and there's a fairly low limit on how many active processes you can have before the machine spends more time context switching than doing useful work. (See the sketch below for the event-driven shape of this.)
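        Since this thread is about POE: here is a minimal sketch of that fan-out pattern using POE and POE::Component::Client::HTTP (the back-end URLs are hypothetical placeholders). The session fires all of its sub-requests at once and handles the responses as they arrive, so nothing sits blocked waiting:

            #!/usr/bin/perl
            use strict;
            use warnings;
            use POE;
            use POE::Component::Client::HTTP;
            use HTTP::Request::Common qw(GET);

            # Hypothetical back-end services the front end fans out to.
            my @services = ('http://backend-a.example/', 'http://backend-b.example/');

            POE::Component::Client::HTTP->spawn(Alias => 'ua');

            POE::Session->create(
                inline_states => {
                    _start => sub {
                        my ($kernel, $heap) = @_[KERNEL, HEAP];
                        $heap->{pending} = scalar @services;
                        # Fire every sub-request immediately; none of them blocks.
                        $kernel->post(ua => request => 'got_response', GET($_))
                            for @services;
                    },
                    got_response => sub {
                        my ($kernel, $heap, $req, $res) = @_[KERNEL, HEAP, ARG0, ARG1];
                        printf "%s -> %s\n", $req->[0]->uri, $res->[0]->status_line;
                        $kernel->post(ua => 'shutdown') if --$heap->{pending} == 0;
                    },
                },
            );

            POE::Kernel->run;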