I just posted my first obfuscated code, and it got me thinking how "easy" it would be to write a Perl virus, obfuscate and encrypt it, call it in a wrapped-up system or exec, post it in the Obfuscated Code section, and see who bites. Now I KNOW there are plenty of Saints around here, but something COULD slip through.

Now I, of course, would never do such a thing, but someone could. If you aren't sure exactly what a program does, DO NOT EXECUTE it! It could delete C:\, /, or do any number of evil things.
WrongWay

Replies are listed 'Best First'.
Re: Obfuscation and viruses
by jepri (Parson) on Jun 17, 2001 at 16:40 UTC
    This is certainly a problem for people on Win9x machines, and fortunately for no one else. Under Win9x there are some tricks you can do, like temporarily write-protecting your hard drive (there is a program for that), or switching it off, or whatever.

    Clever DOS coders can write a quick assembly routine to grab control of the BIOS and DOS read/write routines.

    However in Win9x you are stuffed. The best you can hope for is that a sophisticated virus scanner will detect the script trying to erase your windows directory.

    In short, there is no hope for you if you continue to run Win9x. The solution is to create a dual-boot NT/Win9x system. Win2000 is good (surprisingly good, actually). Dual-booting Linux is better; the easiest way to do that is to install Phat Linux or Dragon Linux. These two products sit in a folder just like normal Windows programs, and start up when you double-click on them. Very easy.

    All Linux/BSD users: would you survive a rm -rf ~/*? Don't forget to run unknown scripts as another user who doesn't have access to all your documents!

    ____________________
    Jeremy
    I didn't believe in evil until I dated it.

      Please don't be complacent about the ability of different operating systems to make viruses a non-problem.

      IIRC try this on Linux:

      #!/bin/sh
      $0 &
      perl -e 'push @big, 1 while 1' &
      while true
      do
          $0
          sleep 1
      done
      Yes this is fixable with the right set of rlimits. But crafting that set is surprisingly hard to do, and is hard to do without getting in the way of having a usable desktop. (FWIW Windows NT does far better with this particular DoS. I have heard that that is partly because its memory management subsystem is designed after what VMS does.)
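      For reference, the rlimit approach can be sketched from the shell; the limit values below are purely illustrative and not a vetted policy, and the commented-out script name is hypothetical:

```shell
# Run suspect code in a subshell whose resource limits are capped;
# the limits die with the subshell. Values are only illustrative.
(
    ulimit -u 50          # cap user processes: starves the fork loop
    ulimit -v 204800      # cap virtual memory (KB): kills the memory hog
    ulimit -u; ulimit -v  # confirm the caps before running anything
    # perl untrusted.pl   # hypothetical: run the suspect code here
)
```

      As tilly notes, picking values that stop a fork bomb without breaking a normal desktop session is the hard part.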

      Also on any OS you care to name, if you keep content you care about as yourself, a nasty virus can cause a problem for you.

      Plus, often the malicious just want access to resources; that works fine for DDoS, for instance. Modify someone's .login so your zombie starts up routinely - that is enough to waste bandwidth and improve your ability to lay waste to someone else's site.

      Plus on any Unix-like OS I know of except OpenBSD, it isn't that hard to find, install, and execute a rootkit given local access. This process can even be automated.

      And, of course, with access to CPAN (which was just compromised...) you can put up code that many people will run as root. Gosh, darn.

      Now the Unix world has improved since the days when Robert Morris accidentally shut down the Internet. But too many people no longer remember that lesson, and the Windows world definitely does not. Personally, a couple of years ago I became convinced that it was only a matter of time until a virus or worm came around that can do the same thing the Morris worm did - hit a large fraction of the machines it encounters, with a generation time that does not depend on human error - and shut down the Internet again. (The example I like to give is a virus that can propagate itself through a buffer overflow in a TCP/IP stack.)

      The EROS people claim that the problems are inherent in the security model we all use. I accept their arguments that the commonly used security model is inherently flawed. I find their arguments that their model fixes the issue to be plausible, but I would need to see it put to the test before I am persuaded.

      However more fundamentally than that, security is a cost center for businesses. The immediate response in any healthy business to costs is to ask how to offload the cost to someone else, or failing that to find the minimum that can be paid without harming revenue. As you will discover if you read your license agreements, that has been definitely going on in the software industry for a long while. All too often closed source is used as an excuse to ignore dealing with basic security mistakes. Like backdoors.

      And the open source zealots shouldn't feel smug either. The economic realities are the same. According to the Honeynet Project, the expected lifetime of a default Red Hat 6.2 server online is under 72 hours. As SourceForge found out, open source developers cheerfully walk up to public access terminals and type critical passwords into ssh - public access terminals whose security they know nothing about. Also, it isn't a secret that a significant portion of open source projects are written in C, and submissions are not closely audited for security mistakes like buffer overflows - or at least are not audited by the project maintainer, and not for a long time by anyone with a white hat. But if you were discovered to have had a buffer overflow in a patch, do you think people would suspect you of anything...?

      Gah, I should stop ranting about (in)security before I depress myself. :-(

        No need to depress yourself :) We live in an imperfect world, and do the best we can. Often, our best is enough.

        There are many subtleties to security management, and I am no expert in these affairs. But I can spot that your post is very focused on technical problems and solutions. Some of the numbers you quote seem a little off to me, but I respect the depth and breadth of your knowledge on technical matters. If you say there's a problem, there is a problem. However, technical issues are only one part of proper security management, which is itself just part of ordinary risk management. Looking at the whole situation shows that things are not so bad.

        I'd like to mention the risk-damage-payoff matrix. You've probably seen it, so I'll just mention it here for completeness, and for any other readers who haven't. You can make it very complicated, but it looks kind of like this (excuse ascii text):

                   | harmless | mild | catastrophic
        -----------+----------+------+-------------
        h.unlikely |    0     |  .2  |     .7
        unlikely   |    0     |  .3  |     .8
        maybe      |   .2     |  .5  |     .9
        likely     |   .3     |  .7  |     1
        certain    |   .4     |  .9  |     1

        The table is filled out with some values representing the amount of time/attention/effort you should spend safeguarding against the threat. Filling out the matrix is a difficult thing to do, and depends on the situation. You can almost ignore threats that are harmless and unlikely. If you are faced with any threats that are certain and catastrophic you should eliminate them, or find someone to blame. In between, you should be deploying an appropriate response.

        Now I can evaluate security threats more effectively. I'll go through a few scenarios that tilly mentions, and one he doesn't:

        * Crashes computers - The fork bomb
        Harmless and unlikely. At the worst I have to get up and power-cycle the server. I can cope with this.

        * Compromises data - The root kit
        Maybe and mild damage. A root kit means that somebody wants the machine to keep working. As long as it keeps serving, our business is not lost. I can cope with a root kit, and deal with it at my leisure.

        * Destroys servers - Fork bomb replaced with rm / -rf
        Catastrophic and unlikely (why do this when you've gone to the effort to hack my machines?). This would sink our business. Good thing I keep backups. With backups the damage is 'mild' - loss of business due to downtime and some data loss.

        Technical solutions always go hand in hand with management and procedural solutions. You tell me that my machine could be hacked once every three days? Fine, I'll hire someone to rebuild the machine every three days. I'm much more worried about network DoS attacks, because I can't control or minimise my risk there.

        Most security problems occur not because of technical flaws, but because people are intent on shooting themselves in the foot. They write their passwords on post-it notes and stick them to the monitor. They write their PINs on their ATM cards. We will eventually have secure operating systems, but the idiots running them will manage to compromise security anyway, by ignoring the procedures that could protect them because those procedures are inconvenient. The only way to really secure something is to set it in concrete and then dump it in the Mariana Trench. If you want people to actually use it, you have to accept the risks, and start working to cope with them.

        ____________________
        Jeremy
        I didn't believe in evil until I dated it.

Re: Obfuscation and viruses
by myocom (Deacon) on Jun 17, 2001 at 19:46 UTC

    Agreed. For that matter, as other monks have pointed out before (Dominus in particular, if I recall correctly), how often do you check the code in CPAN modules? They could also easily contain malicious code. I know I rarely (I'd say never, but I've checked at least once) see what the source says before I try using the module...

      I never check, but I figure I'd hear about it pretty fast if there were a problem. Accountability is a factor here, since it would be possible to track down a submitter and slap them with a large-sized lawsuit.

      Also, the CPAN testers would be likely to spot a script that did nasty things.

      Finally, Perl should run with the permissions of the caller, so a malicious module would still be unable to affect the entire system.

      I do agree with the basic fear, though. A module that is useful to the root user could be booby-trapped to mail /etc/passwd and the computer's IP address to a Usenet group, or something similar.

      ____________________
      Jeremy
      I didn't believe in evil until I dated it.

Re: Obfuscation and viruses
by grinder (Bishop) on Jun 18, 2001 at 14:30 UTC

    This is a good point, and it's something that has always worried me at the back of my mind, for the whole point of this section is to convince people to download code whose meaning is purposely rendered opaque... and then get them to run it!

    There are a number of actions, depending on your level of paranoia, that you can take:

    • Try to understand how the code works before blindly running it. Use perl -MO=Deparse to reformat the script into something reasonable, or use Perltidy. Note that some scripts gleefully include syntax constructs designed to make Deparse melt down on purpose, and code embedded in variables is impervious to this approach.
    • Only run code from monks you know, or monks that other monks can vouch for. This means: don't run obfus from someone whose account is less than a month old. Wait until they "prove their worth" with posts in other parts of the monastery. Check their homenode. Find out if they have any other monks who are friends or colleagues. Downside: note that you still can't really be sure where the code came from. The 59 /e obfu was posted by BooK, but that only means it came from a computer that was successfully authenticated against the perlmonks BooK account. That does not necessarily mean Philippe Bruhat, a person I will personally vouch for, actually posted the code.
    • Run the code step by step in the debugger. This is a particularly effective method for understanding how the code does its thing. Downside: some scripts gleefully include code to bust the debugger.
    • Run the code as nobody, or similar unprivileged account.
    • Run the code in a chroot'ed jail.
    • Run the code in a Safe compartment.

    If you are unsure, wait. Wait until you see replies posted to the node. If in doubt, ask a question in the Chatterbox. If a trojan is ever posted, and one day, one will be, it will be spotted quickly and the appropriate steps will be taken.

    Semantic quibble: it's not a problem of viruses per se but rather one of trojans.


    --
    g r i n d e r
Re: Obfuscation and viruses
by buckaduck (Chaplain) on Jun 18, 2001 at 22:44 UTC
    I notice that a few people have mentioned that untrusted programs should be run as another user such as "nobody". This neglects the fact that many people don't have the ability to do this, unless they happen to be UNIX or Linux sysadmins. I'll admit that I am one, but most of the Perl programmers I know at work are not.

    Besides, are you sure that there isn't any area on your hard drive that the user "nobody" has write access to? If your server has a lot of users, you can bet that they are making some of their files world-writable. My users do, quite often.
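    Checking for such files is straightforward; here is a sketch with find (the /home path is just an example, adjust to taste):

```shell
# List regular files under /home that anyone may write to.
# -xdev keeps find on a single filesystem.
find /home -xdev -type f -perm -0002 -print
```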

    The bottom line is that a program from an untrusted source (or a trusted source that has been hacked) has a potential for danger on most servers. So beware...

    buckaduck

      Indeed, merlyn has an example that works on many flavours of Unix, which allows "nobody" to take down a machine using the fact that /tmp is world-writable and there is a regular cron job run as root that assumes filenames don't include embedded newlines.
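      That class of /tmp race is exactly why predictable temp-file names are dangerous. A sketch of the safer pattern using mktemp (the name template is illustrative):

```shell
# Create a temp file atomically with an unpredictable name instead of
# trusting a fixed, guessable path in world-writable /tmp.
tmpfile=$(mktemp /tmp/scratch.XXXXXX)
echo "scratch data" > "$tmpfile"
rm -f "$tmpfile"
```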
Re: Obfuscation and viruses
by belg4mit (Prior) on Jun 18, 2001 at 20:10 UTC
    Well yes of course it's possible.

    But then again, you ought to be running something like that with -w, taint, Safe, and penguin, as an unprivileged user...

    :-P

      D'oh! and strict
Re: Obfuscation and viruses
by beretboy (Chaplain) on Jun 18, 2001 at 04:38 UTC
    HeHeHe..... this is very funny, because someone just posted code on how to lock up someone's mouse and keyboard }-)