in reply to Bad multitasking (RE: MacPerl???)
in thread MacPerl???

Now let's be fair. Save the following and run it on virtually any *nix system:
#! /bin/sh
# Tie up memory
perl -e 'push @big, 1 while 1' &
# Breed
sh $0 &
# And be a real hog
while [ true ]
do
  $0
  sleep 1
done
I am pretty sure I have it right.

If I do, run it, and unless ulimits have been properly set you will see hogging in action!

Don't worry about taking hours to request that the system be properly shut down. You will need to do a hard reboot.
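For contrast, here is a rough sketch of what "properly set" can mean, assuming a bash- or ksh-style shell whose ulimit builtin accepts -u and -v (strict POSIX sh only guarantees -f); hog.sh is just a made-up name for the script saved above:

    # Run the hog in a subshell with limits so only it and its children are capped.
    # The numbers are illustrative, not recommendations.
    ( ulimit -u 64        # at most 64 processes for this user: the "breeding" fizzles out
      ulimit -v 65536     # at most 64 MB of address space (KB units): the perl loop dies early
      sh ./hog.sh )

Run unprotected, the script takes the machine down; run like this, it should just spray errors into your terminal and peter out.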

My experience is that NT has awful process-switching performance. I believe that Linux switches processes faster than NT switches threads. However, in my experience NT survives casual CPU and memory starvation much better than Linux does. Try a few test Perl scripts out and see if you don't find the same.
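A couple of quick-and-dirty probes along those lines (sketches only; run them on a machine you don't mind abusing, and be ready to kill them):

    # CPU starvation: one busy loop per CPU you want to saturate
    perl -e '1 while 1' &
    # Memory starvation: grab roughly 100 MB in one gulp, then sit on it
    perl -e '$x = "x" x (100 * 1024 * 1024); sleep 600' &
    # Now compare how long an ls, a window drag, or a fresh login takes on
    # each OS, and clean up afterwards (e.g. kill %1 %2 from the same shell).

The interesting part is not whether the machine slows down (it will) but how usable the rest of the system stays while these run.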

OTOH I have seen occasional badly behaved processes (Lotus Notes on a couple of systems comes to mind) which reduce NT to sheer misery. For instance there is some kind of locking conflict which leaves NT at 1% CPU usage.

In general denial of service is a hard problem to solve. NT has worse average behaviour but tries to handle some DoS situations. Linux has better average behaviour but does not even try to think about DoS. It is far too easy to get either into really sad obscure failure modes.

But hey, if reliability is your top concern, why aren't you running an AS/400?

Replies are listed 'Best First'.
(tye)RE: Bad multitasking (RE: MacPerl???)
by tye (Sage) on Oct 10, 2000 at 19:45 UTC

    I've never used Linux much (sure, I've used many flavors of Unix, many of them extensively -- just not Linux). ):

    Thanks. Upon further reflection, I think you may be right. I think I'm just lucky at finding what look like fairly well-written programs that can be coerced into nearly locking up NT while also hogging the CPU. These must be doing something kernelish as well? While a fairly simple endless loop will lock up my Win98 desktop (even though it's in a 32-bit program), Windows NT only becomes a bit sluggish.

    As for Unix (and other multi-user) systems, my experience differs from yours. I tend to run such systems in a true multi-user fashion where root never does casual things and non-root users have limits on the number of processes, max VM, etc. to prevent accidental system lock-ups. Also, root, the console login, and networking have higher priorities so important things can be done when the system is in trouble. The worst system effect from mere mortal users I've seen there is a fork() bomb, but I've managed to recover from those more easily than I have from a nearly-locked NT.

    I can't decide what you mean by NT surviving "memory starvation". Without defragging the swap file, NT can easily become quite sluggish if forced to page a lot. And once you have actually run out of page file space under NT you can no longer trust major components of the system and should reboot soon (in my experience).

    My experience with Unix systems, by contrast, is that performance under heavy paging degrades more linearly, and a given process that runs out of swap space quickly dies, so you know which processes you can trust (and the kernel is protected from such things).

    Now, I'm not trying to claim that Unix is better than NT or even vice versa. I started this just griping about a pet peeve of mine that I acquired because of Remotely Possible but that I've seen repeatedly since. Thanks for helping me realize that it isn't just a CPU-hog problem.

    P.S. When I said it takes hours to request NT to shut down, I wasn't exaggerating. Resolution of this problem almost always requires cycling power. But I've seen it so much that I've actually spent the time to see whether it was even possible to ever get NT to shut itself down (in some cases because I really wanted to save some changes!). Part of the problem here is that NT shuts down the desktop in such a sequential fashion. If the hog isn't one of the first processes to get the shutdown request, then we have to painstakingly wait for each process to slowly shut down and possibly prompt the user before the shutdown request will even be sent to the hog. So if you get really lucky and the hog is the first process, it may only take you 20 minutes to receive and push the "End Task" button and get your system back. Otherwise it really can be hours before you give up on saving your 20 minutes of unsaved work and cycle power.

            - tye (but my friends call me "Tye")
      The reason for the different experience is that you were dealing with Unix systems which had proper ulimits set up. The available types of restrictions are appropriate for a multi-user server machine. They are not well suited to the needs of a dedicated server (where one server is expected to take up all available resources) or for a desktop (ditto).

      Given that most Linux systems fall into the latter two categories, the default install of virtually every distro that I have encountered sets up no resource limits. Therefore even a simple fork bomb will utterly crush your typical Linux machine. BSD fanatics, take note: the same is true for your average BSD install. To test whether this is true, type ulimit at a command prompt; if it comes back with "unlimited" then this applies to you!
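      A sketch of checking and then tightening that up, assuming a bash-style shell; the second half assumes a system that uses pam_limits (item names vary a little between versions), and "joe" is just a made-up user name:

          ulimit -a            # list every current limit; a column of "unlimited" is the warning sign
          ulimit -u 128        # this shell and its children: at most 128 processes
          ulimit -v 131072     # ...and at most 128 MB of address space (value is in KB)

          # To make per-user limits stick across logins, something like these
          # lines in /etc/security/limits.conf (format: domain type item value):
          #   joe   hard   nproc   128
          #   joe   hard   as      131072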

      As for memory starvation, my experience runs like this. NT wants a whole process to be in memory at once, so once you begin running low on memory you feel it almost immediately, because switching processes becomes painfully slow. By contrast, any modern *nix has demand paging; you may have most of your processes paged to disk without even noticing. But the second the active pages you are constantly hitting exceed RAM, the system hits a wall and suddenly starts grinding. (With proper ulimits you would be very unlikely to hit this point in regular usage.) When you finally run out of memory, processes start dying. :-(Linux in particular has a very stupid process-killing algorithm; it is not rare to see key processes like klogd and autofs go down.)-: NT starts degrading much earlier but seems to degrade more smoothly.
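      If you want to see that wall coming on a Linux box, vmstat makes it fairly visible (a sketch; exact column layout varies a little between versions):

          vmstat 1        # print memory and paging statistics once a second
          # While paged-out stuff is idle, the "si"/"so" (swap-in/swap-out)
          # columns stay near zero.  Sustained big numbers there mean the
          # active working set no longer fits in RAM and the grinding starts.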

      As for serious and nasty NT problems, try right-clicking on the taskbar and bringing up the Task Manager. Killing the offending process in the Task Manager is likely to be much faster than an orderly shut-down. (At least it has been in my experience.)

      And yes, I have heard that those really nasty slow-downs are actually problems within the kernel. Microsoft has a habit of improving the performance of key stuff (e.g. video) by "integrating" it into the kernel. Every time they do this, the whole system winds up vulnerable to bugs in that subsystem. (The video subsystem in particular is the reason that Ed Curry, mentioned on my home node, told Microsoft that NT 4.0 would be unable to get C2 certification.)

      Ironically this means that not only do applications cause Windows to be unstable, but it is very specifically Microsoft applications that tend to be the worst offenders! (Because they have the know-how to play kernel games to give themselves a speed boost.)
