in reply to Re^2: RFC: Using 'threads' common aspects
in thread RFC: Using 'threads' common aspects

Oh, come now, mine very-public opponent (and quite against my will) ... sure it does.

The multiprogramming level, whatever you call it, is “the (maximum...) amount of simultaneous activity that the program in question will attempt.”   And that is, the number of threads in the pool.   Whether the system has 10 requests to do or 100, if the MPL is set to 10 (by whatever means), there are 10 threads working and there are 90 requests in the queue.   This avoids having the system overcommit itself, overload virtual memory or some other resource, and smash into the bloody wall of exponentially-degrading completion times.   Throughput (as perceived by the system’s clients) is controlled by the queue-selection algorithms, but the system will not smash its own face under heavy load.
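To make the bounded-pool idea concrete, here is a minimal sketch (in Python for illustration; the same shape applies with Perl's threads and Thread::Queue). The MPL constant, the sentinel shutdown, and the doubling "work" are all illustrative choices, not anyone's production design: 100 requests arrive, but only 10 threads ever run, so 90 requests simply wait in the queue.

```python
import queue
import threading

MPL = 10  # maximum simultaneous activity: the number of threads in the pool

def worker(tasks, results):
    """Pull requests off the shared queue until a sentinel arrives."""
    while True:
        job = tasks.get()
        if job is None:          # sentinel: no more work for this thread
            tasks.task_done()
            break
        results.append(job * 2)  # stand-in for real request handling
        tasks.task_done()

tasks = queue.Queue()
results = []                     # list.append is atomic under CPython's GIL

pool = [threading.Thread(target=worker, args=(tasks, results))
        for _ in range(MPL)]
for t in pool:
    t.start()

# 100 requests queued, but never more than MPL of them in flight at once.
for request in range(100):
    tasks.put(request)
for _ in pool:                   # one sentinel per worker to shut down cleanly
    tasks.put(None)

tasks.join()
for t in pool:
    t.join()

print(len(results))              # all 100 requests completed, 10 at a time
```

However heavy the load gets, the process never holds more than MPL requests' worth of working state at once; everything else is just a cheap entry sitting in the queue.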

My other advice is simply, model and measure the thing as best you can and figure out what sort of workload-management calisthenics these data tell you is actually required.   It is very easy to build a system that is truly more complex than it need be, and that, for being so complex, is actually less reliable.   I’ve done it; so have many others.   It’s just like biology:   “in a perfectly-designed experiment, under the most carefully controlled conditions, the little mouse will do as he damn well pleases...”

Re^4: RFC: Using 'threads' common aspects
by BrowserUk (Patriarch) on Jan 13, 2011 at 19:44 UTC
    The multiprogramming level, whatever you call it, is “the (maximum...) amount of simultaneous activity that the program in question will attempt.” And that is, the number of threads in the pool.

    I'm sorry, but you are so wrong it is hard to know quite where to start correcting you. If your mental model of modern, pre-emptive, multi-tasking, multi-threading OSs is based on the long-abandoned early attempts at multiprogramming from the 1960s, it becomes clear why so many of your other wisdoms on threading are so far off base.

    There were no threads, and nothing simultaneous. Nothing pre-emptive; nor even cooperative. The multi-programming level controlled the number of tasks that were held in memory, and that is it. The idea was that when (if) one of those tasks did some peripheral IO, control could be transferred to one of the other memory-resident tasks. But if a task did no IO, the other tasks never got a look-in until the first finished. The aim was simply to improve CPU utilisation, nothing more.

    Maybe this will clarify things for you.

    There is an extremely tenuous analogy between MPL and the thread pool model, but it is that between a modern, grid-synchronised, computer-controlled traffic-light network and a policeman on a box waving his arms around.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.