in reply to (crazyinsomniac) Re: MacPerl???
in thread MacPerl???

Note that this issue is very Mac specific.

One of the basic jobs of an operating system is to coordinate between different processes/threads. The two basic choices are cooperative multitasking and pre-emptive multitasking.

In cooperative multitasking, the OS hands control to a process/thread and then waits for control to be handed back. In pre-emptive multitasking, control is handed over, but the OS will grab it back whenever it wants: after various interrupts, when the process asks the OS to do something, or at the end of the process's time slice.
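To make the distinction concrete, here is a toy cooperative round-robin written in Perl. This is purely an illustrative sketch - it is not how MacOS schedules anything - but it shows why the scheme depends on every task behaving:

    #!/usr/bin/perl -w
    use strict;

    # Toy cooperative scheduler: each "task" is a closure that does one
    # small slice of work and then returns, voluntarily yielding control.
    my @tasks = (
        sub { print "task A did a slice of work\n" },
        sub { print "task B did a slice of work\n" },
    );

    for my $round (1 .. 3) {
        for my $task (@tasks) {
            # The scheduler regains control only because the task returns.
            # A task that looped forever here would starve everything else.
            $task->();
        }
    }

Replace either closure with an infinite loop and the whole "system" hangs, which is exactly the single-process lock-up described below.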

All of the issues that I discussed in Threads vs Forking (Java vs Perl) simply do not exist for cooperative multitasking. There is never any problem with unexpected interruptions because nobody is going to interrupt you. But on the flip side, any process can lock up the whole system permanently. Furthermore, pre-emptive multitasking both accomplishes background tasks faster and is more responsive. (Ironically, throughput on batch jobs can be higher under cooperative multitasking - there is no need to worry about locking and unlocking.)

MacOS is cooperative. DOS was. Ditto Windows 3.1. Microsoft claims that the Win9x line is pre-emptive, but in truth it is a hybrid, cooperatively multitasking 16-bit programs and pre-emptively multitasking 32-bit ones. (This was probably wise: if you pre-emptively multitask a process that was written for cooperative multitasking, a lot of race conditions come out of the woodwork.)

By comparison, Windows NT, BeOS, any form of *nix, VMS and so on are all pre-emptively multitasked. Indeed, so is OS X. Virtually anything billed as heavy-duty - certainly anything that can use 2 CPUs, serve multiple users at once, etc. - is pre-emptive.

What this means is that on the Mac a single poorly-written process can lock up the system. Similarly, if you go to the menu bar, then while you are holding the menu open nothing else happens. Your webserver is unavailable, your print jobs are not spooling, and SETI is going on without you...

This is true for virtually no other OS you are likely to encounter today.

Bad multitasking (RE: MacPerl???)
by tye (Sage) on Oct 10, 2000 at 18:29 UTC

    MacOS is cooperative.[....]

    By comparison Windows NT [is] pre-emptively multitasked.[....]

    What this means is that on the Mac a single poorly-written process can lock up the system.

    Note that a single program that isn't even very poorly written can quite effectively lock up Windows NT. All you need is something that takes up a lot of CPU. Designing a complex program such that it will never use "too much CPU" is extremely difficult. This is a problem that is best solved by the pre-emptively multitasking operating system. Unfortunately, Windows NT "solves" this problem so badly that it really isn't solved at all.

    Once a CPU hog starts running, Windows NT manages to let other things run, but at such a slow pace that it can take hours simply to request that the system be shut down. Finding and stopping the hog is unthinkable. Anything not done via the desktop is useless, because the response is so slow that time-outs are all you get.

    So one badly-written process probably can't completely lock up NT as far as scheduling is concerned, but the scheduling is not good enough for this distinction to have much practical meaning.

    And now, back to discussions remotely related to Perl... :)

            - tye (but my friends call me "Tye")
      Now let's be fair. Save the following and run it on virtually any *nix system:
      #! /bin/sh

      # Tie up memory (backgrounded so the rest of the script runs too)
      perl -e 'push @big, 1 while 1' &

      # Breed
      sh $0 &

      # And be a real hog
      while true
      do
          $0
          sleep 1
      done
      I am pretty sure I have it right.

      If I do, run it, and unless ulimits have been properly set you will see hogging in action!

      Don't worry about taking hours to request that the system be properly shut down. You will need to do a hard reboot.

      My experience is that NT has awful performance on switching processes. I believe that Linux switches processes faster than NT switches threads. However, in my experience NT survives casual CPU and memory starvation much better than Linux does. Try a few test Perl scripts and see if you don't find the same.
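      For instance, something along these lines (rough one-liners, not a careful benchmark - tune the sizes to your machine before drawing any conclusions):

          # CPU starvation: a pure spin loop.
          perl -e '1 while 1'

          # Memory starvation: grow a list until the box starts paging hard.
          perl -e 'push @big, "x" x 1024 while 1'

      Then watch how long an interactive shell or the desktop stays responsive on each system.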

      OTOH, I have seen occasional badly-behaved processes (Lotus Notes on a couple of systems comes to mind) which reduce NT to sheer misery. For instance, there is some kind of locking conflict which leaves NT at 1% CPU usage.

      In general, denial of service is a hard problem to solve. NT has worse average behaviour but tries to handle some DoS situations. Linux has better average behaviour but does not even try to think about DoS. It is far too easy to get either into really sad, obscure failure modes.

      But hey, if reliability is your top concern, why aren't you running an AS/400?

        I've never used Linux much (sure, I've used many flavors of Unix, many of them extensively -- just not Linux). ):

        Thanks. Upon further reflection, I think you may be right. I think I'm just lucky at finding what look like fairly well-written programs that can be coerced into nearly locking up NT while also hogging the CPU. These must be doing something kernelish as well? While a fairly simple endless loop will lock up my Win98 desktop (even though it is a 32-bit program), Windows NT only becomes a bit sluggish.

        As for Unix (and other multi-user) systems, my experience differs from yours. I tend to run such systems in a true multi-user fashion, where root never does casual things and non-root users have limits on number of processes, max VM, etc. to prevent accidental system lock-ups. Also, root, the console login, and networking have higher priorities, so important things can still be done when the system is in trouble. The worst system-wide effect from mere mortal users I've seen there is the fork() bomb, but I've managed to recover from those more easily than I have from a nearly-locked NT.
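        Incidentally, that kind of per-process limit can even be set from inside Perl itself. Here is a sketch using the BSD::Resource module from CPAN (an add-on module, so this assumes you have it installed):

            #!/usr/bin/perl -w
            use strict;
            use BSD::Resource;    # CPAN module, not core - assumed installed

            # Cap this process at 10 CPU-seconds (soft) / 20 (hard).  A
            # runaway loop then gets SIGXCPU instead of starving the box.
            setrlimit(RLIMIT_CPU, 10, 20)
                or die "setrlimit failed: $!";

            1 while 1;    # spins until the kernel enforces the limit

        Of course, for accidental lock-ups you really want the administrator to impose limits on every login rather than trusting each program to limit itself.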

        I can't quite tell what you mean by NT surviving "memory starvation". Without defragging the swap file, NT can easily become quite sluggish if forced to page a lot. And once you have actually run out of page file space under NT, you can no longer trust major components of the system and should reboot soon (in my experience).

        My experience with Unix systems, by contrast, is that performance under heavy paging degrades more linearly, and a given process that runs out of swap space quickly dies, so you know which processes you can still trust (and the kernel is protected from such things).

        Now, I'm not trying to claim that Unix is better than NT, or even vice versa. I started this just griping about a pet peeve of mine that I acquired because of Remotely Possible but that I've seen repeatedly since. Thanks for helping me realize that it isn't just a CPU-hog problem.

        P.S. When I said it takes hours to request NT to shut down, I wasn't exaggerating. Resolution of this problem almost always requires cycling power. But I've seen it so much that I've actually spent the time to see if it was even possible to ever get NT to shut itself down (in some cases because I really wanted to save some changes!). Part of the problem here is that NT shuts down the desktop in such a sequential fashion. If the hog isn't one of the first processes to get the shutdown request, then you have to painstakingly wait for each process to slowly shut down and possibly prompt the user before the shutdown request will even be sent to the hog. So if you get really lucky and the hog is the first process, it may only take you 20 minutes to receive and push the "End Task" button and get your system back. Otherwise it really can be hours before you give up on saving your 20 minutes of unsaved work and cycle power.

                - tye (but my friends call me "Tye")