in reply to Re: Building a web-based system administration interface in Perl
in thread Building a web-based system administration interface in Perl

That's what "taint mode" is about, right? :)

Seems reasonable.

I still use mainly perl 5.8.8, but I don't plan any compatibility with 5.6 or older. It may work, of course, but I won't even try.

That's natural. It will be somewhat self-contained. However, it will use real CPAN modules, not some strange weirdo versions bundled inside the way webmin does.

Mostly. I don't plan on using anything but Perl, but it will use XS modules.

Sorry, no. I won't buy this one. There are too many fine webservers (even Perl ones). It won't be specific to any, but it won't carry one either. For a start we'll work with Apache2 over SSL... I don't see the point of embedding a webserver, except maybe at a later stage for easier quick-and-dirty deployment (but I'm too wary of "quick n' dirty" that lasts forever).

Yes, this is one of the first planned features :)

My main target right now is Linux. I don't see any reason why it wouldn't work on any distribution. For BSDs and Solaris, it will probably work, but some modules (file sharing, network configuration, etc.) will need an OS-specific rewrite. However, I'd rather have different modules for different OSes than "webmin style" modules, because it will make the code much simpler and cleaner.

Yes, this one is extremely important. We'll need a simple API that lowers the bar for anyone wanting to write a module. eBox is extremely good, so I'll copy them shamelessly :)

Well I'll do my best :)

Right now it's still a matter of discussion with my team. I'm absolutely with you, but it's clear that making AJAX UIs that work flawlessly in links is a challenge. I still don't know how we'll sort this out; making a good, modern, easy-to-use interface is also a top priority.

The interface should ALWAYS run as an unprivileged user. A small, separate process should do the privileged work, and that process should not communicate with the browser. And that process is not invoked via the shell, but directly (i.e. NO single-string system "command and parameters"), just to avoid nasty shell surprises.
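In Perl, the indirect-object form of system() guarantees the shell is bypassed even when only one argument is passed. A minimal sketch of such a direct invocation (the helper path in the comment is a hypothetical example):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Run a command directly, never through /bin/sh. The indirect-object
# form  system { $cmd } $cmd, @args  always calls execvp() itself,
# so shell metacharacters in the arguments stay literal.
sub run_command {
    my ($cmd, @args) = @_;
    my $rc = system { $cmd } $cmd, @args;
    $rc == 0 or die "'$cmd' failed (status $?)\n";
    return 1;
}

# Hypothetical helper path:
# run_command('/usr/local/libexec/admin-helper', 'restart', 'sshd');
```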

I'm eager to hear more about this one :) Right now I think the easiest thing to do is to use a "sudo" module, like eBox does, but limit the unprivileged user (in /etc/sudoers) to executing one particular Perl program, which will carry out the privileged chores through an API (that limits somewhat what can and can't be done). Any other suggestions welcome...
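The sudoers rule for that setup could be as small as this (user name and helper path are hypothetical examples; always edit with visudo):

```
# /etc/sudoers -- allow the web user to run exactly one helper, nothing else
webadmin ALL = (root) NOPASSWD: /usr/local/libexec/admin-helper.pl
```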


Re^3: Building a web-based system administration interface in Perl
by afoken (Chancellor) on Apr 08, 2009 at 21:49 UTC
    • Taint mode is just the first step. Each parameter should be validated as strictly as possible. The API could support this by requiring a "description" of the expected parameters to get access to the (validated) parameters. Data::FormValidator is a good start, but it has some legacy problems that I solved in my last project by writing a cleaner re-implementation (that is unfortunately not available to the public).
    • Perl 5.6 or even perl 5.005 should be no big problem unless you need Unicode. In that case, the minimum requirement should be perl 5.8.1.
    • "Own webserver" does not mean that you should write the 10,000th implementation of the server side of the HTTP protocol. But the HTTP server for the tool should be independent of any other parts of the system. Especially on a webserver machine, you don't want to mix public web access and system configuration. You would depend on the security of the public web server, misconfiguring that server would disable your tool, and especially the Apache web server has far too many features that could disturb your tool or give the public full access to the machine. I don't propose an embedded web server. The web server could (and should) live in a completely independent process. It can be a very small and simple server, as it does not have to handle big loads; it should be able to deliver some static content (images, CSS, JS), and it should be able to pass requests to Perl, using CGI, FastCGI or perhaps an equivalent of mod_perl. CGI and FastCGI have the advantage of an additional separation of web server code and application code. A fatal error in the application does not kill the web server, just the application process.
    • OS compatibility: Just don't expect any system to behave like a Linux system, or a specific Linux distribution, and most things should just work. If you need external tools, don't expect them to work like GNU tools. Don't expect them to be in /usr/bin, don't expect the default shell to be a recent version of bash. Using different plugins for different OSes is a good idea, but for components like an Apache configurator, one tool should be able to handle all OSes. That means that plugins need a configuration space, e.g. to set the location of apachectl and httpd.conf.
    • AJAX is nice for some enhancements, like auto-completion or maybe a pure-Javascript terminal. Restrict yourself to use Javascript only as an enhancement, not for basic functions, and you will get a "modern" interface which still works nicely in links, lynx, NN3, TV internet set-top boxes, and old smartphones. Valid and semantic (X)HTML and valid CSS would also be good for this purpose.
    • sudo would also be my first tool for the privileged helper process, but as soon as we leave modern Linux distributions, sudo may disappear. On some distributions, sudo is still optional today. During the installation, the user would have to add rules to allow the unprivileged user to execute the privileged helper process. I think that a classic suid wrapper is easier to install. Unfortunately, this requires writing secure and paranoid C code. The Apache suexec wrapper security model could be a good starting point. You don't want any user to be able to edit /etc/passwd using the wrapper, do you?
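    The small, independent server described two bullets up could start as little more than the following core-modules-only sketch. It is deliberately NOT a full HTTP implementation, just enough to show the tool's server living in its own process; the port and the response handling are hypothetical placeholders:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Turn one HTTP request line into a complete response string.
# Only GET is handled; everything else gets a 405.
sub http_response {
    my ($request_line) = @_;
    my ($method, $path) = split ' ', $request_line;
    return "HTTP/1.0 405 Method Not Allowed\r\n\r\n"
        unless defined $method && $method eq 'GET';
    return "HTTP/1.0 200 OK\r\n"
         . "Content-Type: text/html\r\n\r\n"
         . "<h1>admin tool: $path</h1>\n";
}

# One-request-at-a-time accept loop, bound to localhost only;
# SSL termination would have to happen elsewhere.
sub serve {
    my ($port) = @_;
    my $listen = IO::Socket::INET->new(
        LocalAddr => '127.0.0.1',
        LocalPort => $port,
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "listen: $!\n";
    while (my $client = $listen->accept) {
        my $request = <$client>;
        print {$client} http_response($request // '');
        close $client;
    }
}

serve(8081) if $ENV{ADMIN_TOOL_SERVE};   # hypothetical port
```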

    The privileged process should do as few tasks as possible. It should not start parsing third party configuration files, or rewrite tons of XML files. For some tasks, like reading and writing configuration files, you could get away with a stripped-down version of cat, chmod, chown, mv or cp, plus the paranoia code based on suexec. But when you want to start a service, the privileged process needs to execute an arbitrary program with arbitrary parameters as root, like apachectl, exim, smbd, nmbd, sshd, inetd, or the svc tool from daemontools. It would still run all paranoia checks, but it would give the tool full root access without requiring a password. Still better than running everything as root.
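    The "stripped-down cp plus paranoia" idea could look roughly like this sketch: the privileged process only installs a staged file onto an explicitly whitelisted target. The whitelist contents, paths and mode are hypothetical examples:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Copy qw(move);

# Install a staged file onto a whitelisted target path.
# $allowed is a hashref whose keys are the only permitted targets.
sub install_file {
    my ($allowed, $src, $dst) = @_;

    # Paranoia checks before touching anything as root:
    die "target '$dst' not allowed\n" unless $allowed->{$dst};
    die "'$src' is not a plain file\n" unless -f $src && ! -l $src;

    chmod 0644, $src or die "chmod '$src': $!\n";
    move($src, $dst) or die "move '$src' -> '$dst': $!\n";
    return 1;
}

# Hypothetical usage:
# install_file({ '/etc/resolv.conf' => 1 },
#              '/var/lib/admintool/staged/resolv.conf',
#              '/etc/resolv.conf');
```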

    Alexander

    --
    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
      • Taint mode is just the first step. Each parameter should be validated as strictly as possible.

      I've been thinking more about it. The main problem I see is that, by definition, the administrative interface has to be able to do almost everything: modify base system files (/etc/hosts, /etc/passwd, /etc/resolv.conf, etc.), start and stop daemons, run many different programs, partition drives, and so on. That makes whitelisting commands and parameters practically unmanageable.

      • Perl 5.6 or even perl 5.005 should be no big problem unless you need Unicode. In that case, the minimum requirement should be perl 5.8.1.

      Yes; however, this is extremely low on my priority list. My main priority is to get something that works for common Linux systems and is clean and easy to extend. Other OSes come second; ancient stuff like Solaris 7 or IRIX I'll leave to others :)

      • "Own webserver" does not mean that you should write the 10,000th implementation of the server side of the HTTP protocol. But the HTTP server for the tool should be independent of any other parts of the system.

      I want to make it webserver agnostic (this isn't hard, anyway). I plan to run an existing webserver (Apache, for instance) with a dedicated user and configuration. That's simple, and you can use any other webserver simply by using a different startup file in /etc/init.d/ or equivalent.

      • OS compatibility: Just don't expect any system to behave like a Linux system, or a specific Linux distribution, and most things should just work. If you need external tools, don't expect them to work like GNU tools. Don't expect them to be in /usr/bin, don't expect the default shell to be a recent version of bash. Using different plugins for different OSes is a good idea, but for components like an Apache configurator, one tool should be able to handle all OSes. That means that plugins need a configuration space, e.g. to set the location of apachectl and httpd.conf.

      I'll be pragmatic. I'll stick as much as possible to Perl internals and core modules. However for external commands and shell, I'll try to stick to common basics. If necessary, I'll use bash or GNU core tools because they're readily available for any modern OS anyway. Of course, every module needs to have a site-specific configuration file.

      • AJAX is nice for some enhancements, like auto-completion or maybe a pure-Javascript terminal. Restrict yourself to use Javascript only as an enhancement, not for basic functions, and you will get a "modern" interface which still works nicely in links, lynx, NN3, TV internet set-top boxes, and old smartphones. Valid and semantic (X)HTML and valid CSS would also be good for this purpose.

      Usability and ease of use come first among the prerequisites. If necessary, modules may need both an AJAX and a pure-HTML version.

      • sudo would also be my first tool for the privileged helper process, but as soon as we leave modern Linux distributions, sudo may disappear. On some distributions, sudo is still optional today.

      Optional still means available. I don't see the point of reinventing this wheel; I already have a carriage to build :)

        I've been thinking more about it. The main problem I see is that, by definition, the administrative interface has to be able to do almost everything: modify base system files (/etc/hosts, /etc/passwd, /etc/resolv.conf, etc.), start and stop daemons, run many different programs, partition drives, and so on. That makes whitelisting commands and parameters practically unmanageable.

        Whitelisting is the ONLY way that works without opening BIG gaping security holes. It seems you want to check incoming parameters GLOBALLY. That can't work. Once you know which routine will handle the request, you also know how the parameters have to be validated. My last project had a very simple approach to that problem: the request handler routine had a companion routine that returned a hash reference containing all data required for the validation (Data::FormValidator calls that a profile). Essentially, the main routine first decides which routine will handle the request, then it finds the validation profile by calling <reqhandler>_profile(), validates the parameters, and finally calls the real request handler routine <reqhandler>(). All with whitelists, all secure. And by the way: you DO NOT want to pass the name of the file you want to change with root rights in the form parameters, do you?
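        The dispatch-then-validate pattern described above can be sketched without any CPAN dependency; in a real project the profile hash would be fed to Data::FormValidator->check() instead of the hand-rolled loop below. Handler names, fields and patterns are hypothetical examples:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Each action has a whitelist profile and a handler routine;
# the profile plays the role of <reqhandler>_profile().
my %handlers = (
    start_service => {
        profile => { service => qr/\A(apache2|sshd|exim)\z/ },
        code    => sub {
            my ($params) = @_;
            return "starting $params->{service}";
        },
    },
);

# Pick the handler, validate every parameter against its whitelist
# pattern, reject anything the profile does not mention, then call
# the real handler with the cleaned parameters only.
sub dispatch {
    my ($action, $raw) = @_;
    my $h = $handlers{$action} or die "unknown action '$action'\n";
    my %clean;
    for my $field (keys %$raw) {
        my $rx = $h->{profile}{$field}
            or die "unexpected field '$field'\n";
        $raw->{$field} =~ $rx
            or die "invalid value for '$field'\n";
        $clean{$field} = $raw->{$field};
    }
    return $h->{code}->(\%clean);
}
```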

        I want to make it webserver agnostic (this isn't hard, anyway). I plan to run an existing webserver (Apache, for instance) with a dedicated user and configuration. That's simple, and you can use any other webserver simply by using a different startup file in /etc/init.d/ or equivalent.

        This will limit you to CGI mode. Combined with AJAX, this will cause a nice load on your server.

        If necessary, modules may need an AJAX and a pure HTML version.

        Useless work: you need only one version. Make the code work with pure HTML, and add Javascript (with or without AJAX) for the nice look and feel.

        Alexander

        --
        Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)