in reply to Viewing log files on remote servers.

I'd like to look at this from a different angle.

I am a big fan of syslog servers, so I assume you are talking about files that are generated by syslog (which excludes Apache, for instance; other applications, such as Samba, can be compiled to use syslog instead of their internal logging mechanisms).

You set up a single host with plenty of disk space (I generate about 500MB of logs per day), configure syslogd on it to accept network connections, and configure the rest of your servers to log to it. The first win is that you have moved the logs off all your other servers: less worry about running out of space on /var, and should crackers compromise a server, they can't easily cover their tracks, because the log files aren't around for them to diddle. Which reminds me, remote logging is the only use I can see for the --MARK-- entries in log files: they serve as a heartbeat to let you know your other systems are still ticking over.
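With the classic sysklogd that most Linux distributions ship, the setup is two small changes (the hostname `loghost` below is just a placeholder for your central server):

```
# On the central host: start syslogd with -r so it accepts
# remote messages on UDP port 514 (edit your init script).
syslogd -r

# On each client, add one line to /etc/syslog.conf to forward
# every facility and priority to the central host:
*.*    @loghost
```

The `-m` option on syslogd controls how often those --MARK-- heartbeat lines are emitted, if you want them.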

But I digress. What you have now is a single set of logfiles on one machine, one that will hopefully rotate and expire log files just the way you like. Your task is now to write one set of scripts to deal with them. Try logwatch and see if that doesn't meet your reporting needs. Otherwise File::ReadBackwards will do the trick of tailing a file quite nicely.
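A minimal sketch of the tailing idea with File::ReadBackwards (the log path is just an example; point it wherever your loghost writes):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::ReadBackwards;

my $log = '/var/log/messages';   # example path

my $bw = File::ReadBackwards->new($log)
    or die "Can't open $log: $!\n";

# readline() walks from the end of the file toward the start,
# so collect the last 10 lines and flip them back into order.
my @tail;
while (defined(my $line = $bw->readline)) {
    unshift @tail, $line;
    last if @tail == 10;
}
print @tail;
```

Because it reads from the end, this stays fast even on a multi-hundred-megabyte logfile, where a naive read-and-discard loop would not.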

In any event, if you centralise your logs in one place, you make life much easier for yourself in terms of both collection and analysis. I strongly recommend you consider this approach. All that telnet and ftp stuff sounds like cruft waiting to happen.

I don't think your circumstances are unique; I think it's been done before, many times...

<update>I would advocate that your in-house programs be modified to use syslog. It's a snap to do in C, and a number of modules exist for Perl (Unix::Syslog, Sys::Syslog and Net::Syslog). Seriously though, your telnet solution is just not gonna fly. Writing an app to run on 70 servers (even if they're all Linux or Solaris or whatever) and making it work correctly is no fun. You'll be forever coding around the warts and cruft that have built up over the years. You'll expend all your energy just to stay in one place. It will be a bear to deploy, teaching people how to use it will suck up all your time, and in the end it will be ignored. Build the infrastructure and people will have an incentive to make their programs work with it. Remember, let laziness be your guiding light.</update>
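For the Perl side, Sys::Syslog comes with the core distribution, so converting an in-house program really is a snap. A sketch (the ident `myapp` and the messages are made up; pick a facility like local0 that your loghost routes somewhere useful):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Sys::Syslog;

# openlog(ident, options, facility) -- 'pid' adds the process id
# to each entry, which helps when several copies are running.
openlog('myapp', 'pid', 'local0');

syslog('info', 'processed %d records from %s', 42, 'input.dat');
syslog('err',  'could not open %s: %s', '/some/file', 'permission denied');

closelog();
</Sys::Syslog drops the messages into the local syslogd, which forwards them on if you've configured remote logging>
```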


print@_{sort keys %_},$/if%_=split//,'= & *a?b:e\f/h^h!j+n,o@o;r$s-t%t#u'

Replies are listed 'Best First'.
Re: Re: Viewing log files on remote servers.
by gnu@perl (Pilgrim) on Oct 08, 2002 at 21:26 UTC
    Actually, it is kind of unique. The idea is that different users can have access to different servers. The log files are not all generated by syslog; most of them are generated by in-house programs.

    Also, I have no control over what servers may be added or removed. If a server is added I cannot guarantee that it would be configured for centralized logging.

    With the method I have planned, any user with telnet access to a server could watch any log files that their login allows them to see. I could distribute the app among users and not have to worry (relatively) about who sees what data on what servers. We have over 70 servers here not to mention all of the workstations (probably over 400) distributed across our WAN.

    I do agree with you that centralization has its place; unfortunately, this does not seem to be one of them. If I were administering a web farm or email farm it might make a lot more sense, but most of the servers are stand-alone with varying access and applications.