Please don't be complacent about the ability of a different operating system to make viruses a non-problem.
If I recall correctly, something like this still works on Linux:
#! /bin/sh
# Fork bomb plus memory hog: each copy backgrounds another copy of itself,
# starts a perl process that allocates memory forever, then keeps spawning
# further copies in a loop. Do not run this outside a throwaway VM.
$0 &
perl -e 'push @big, 1 while 1' &
while true
do
    $0
    sleep 1
done
Yes, this is fixable with the right set of rlimits. But crafting that set is surprisingly hard, and harder still without getting in the way of having a usable desktop. (FWIW Windows NT does far better with this particular DoS. I have heard that is partly because its memory management subsystem was modeled on what VMS does.)
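For concreteness, here is a sketch of what such a set might look like in /etc/security/limits.conf on a Linux box with pam_limits enabled. The numbers are illustrative guesses, not a recommended policy, and limits this tight may well get in the way of that usable desktop:

```
# /etc/security/limits.conf sketch - values are guesses, tune per workload
# caps the fork bomb: max processes per user
*       hard    nproc      256
# caps the perl memory hog: address space, in KB
*       hard    as         1048576
# caps runaway computation: CPU time, in minutes
*       hard    cpu        60
```

Note that getting all three right at once is exactly the hard part: set nproc too low and `make -j` breaks; set `as` too low and your browser does.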
Also, on any OS you care to name, if the content you care about is owned by your own account, a nasty virus running as you can destroy it.
Plus, often the malicious just want access to resources, which works fine for a DDoS. Modify someone's .login so that your zombie starts up routinely; that is enough to waste their bandwidth and improve your ability to lay waste to someone else's site.
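A cheap countermeasure for that particular trick is to keep checksums of your shell startup files and compare them at each login. A minimal sketch, where the file name, snapshot location, and the `my-zombie &` line are all made up for the demo, which runs against a throwaway directory rather than your real home:

```shell
# Tripwire sketch for shell startup files: snapshot their checksums once,
# then verify later; any mismatch means somebody edited them.
demo=$(mktemp -d)
printf 'setenv PATH /usr/bin:/bin\n' > "$demo/.login"
( cd "$demo" && sha256sum .login > sums )     # take the baseline snapshot
printf 'my-zombie &\n' >> "$demo/.login"      # simulate the attacker's edit
if ( cd "$demo" && sha256sum -c sums >/dev/null 2>&1 )
then verdict=clean
else verdict=tampered
fi
echo "startup files: $verdict"                # prints: startup files: tampered
rm -rf "$demo"
```

This only raises the bar, of course; anything that can edit your .login can usually edit your checksum file too, so the snapshot belongs somewhere the attacker can't write.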
Plus on any Unix-like OS I know of except OpenBSD, it isn't that hard to find, install, and execute a rootkit given local access. This process can even be automated.
And, of course, with access to CPAN (which was just compromised...) you can put up code that many people will run as root. Gosh, darn.
Now the Unix world has improved since the days when Robert Morris accidentally shut down the Internet. But too many in it no longer remember that lesson, and the Windows world definitely does not. Personally, a couple of years ago I became convinced that it was only a matter of time until a virus or worm came around that could do the same thing the Morris worm did - hit a large fraction of the machines it encounters, with a generation time that does not depend on human error - and shut down the Internet again. (The example I like to give is a virus that propagates itself through a buffer overflow in a TCP/IP stack.)
The EROS people claim that the problems are inherent in the security model we all use. I accept their arguments that the commonly used security model is inherently flawed. I find their arguments that their model fixes the issue to be plausible, but I would need to see it put to the test before I am persuaded.
More fundamentally than that, though, security is a cost center for businesses. The immediate response of any healthy business to a cost is to ask how to offload it onto someone else, or failing that, to find the minimum that can be paid without harming revenue. As you will discover if you read your license agreements, that has definitely been going on in the software industry for a long while. All too often closed source is used as an excuse to ignore basic security mistakes. Like backdoors.
And the open source zealots can stay quiet as well; the economic realities are the same. According to the Honeynet Project, the expected lifetime of a default Red Hat 6.2 server online is under 72 hours. As SourceForge found out, open source developers cheerfully walk up to public access terminals whose security they know nothing about and type critical passwords into ssh. Also, it isn't a secret that a significant portion of open source projects are written in C, and submissions are not closely audited for security mistakes like buffer overflows - or at least not by the project maintainer, and not for a long time by anyone wearing a white hat. But if you were discovered to have had a buffer overflow in a patch, do you think people would suspect you of anything...?
Gah, I should stop ranting about (in)security before I depress myself. :-(