hmerrill has asked for the wisdom of the Perl Monks concerning the following question:

I'm trying to find a way on Red Hat Linux 9 to specify in /etc/syslog.conf a pipe command such that log entries for local4 get piped to a perl script that then inserts the log entries into a MySQL table. I've worked it out so I can use fifo's (named pipes) like the 'syslog.conf' manpage describes:
1. mkfifo /tmp/my_log_fifo
2. In /etc/syslog.conf: local4.* |/tmp/my_log_fifo
so after restarting syslogd, log entries for local4 get written to fifo /tmp/my_log_fifo. Then I have my perl script read lines from the fifo file, and everything works fine.
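For reference, a minimal sketch of the reader side described above. The fifo path, the parse pattern, and the table layout are illustrative assumptions, not from the original post; the DBI insert is left commented out since it depends on your schema and credentials.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical fifo path -- match whatever you used with mkfifo.
my $fifo = '/tmp/my_log_fifo';

# Split a classic syslog line ("Oct 20 16:06:01 host tag: message")
# into (timestamp, host, message). Returns an empty list on no match.
sub parse_syslog_line {
    my ($line) = @_;
    if ($line =~ /^(\w{3}\s+\d+\s[\d:]+)\s+(\S+)\s+(.*)$/) {
        return ($1, $2, $3);
    }
    return;
}

# Read the fifo forever. When syslogd closes its end we hit EOF,
# so reopen and keep waiting for more data.
sub read_fifo_forever {
    while (1) {
        open my $fh, '<', $fifo or die "Can't open $fifo: $!";
        while (my $line = <$fh>) {
            chomp $line;
            my ($ts, $host, $msg) = parse_syslog_line($line) or next;
            # Sketch of the insert, assuming DBI + DBD::mysql and a
            # hypothetical table log_entries(stamp, host, message):
            # $dbh->do('INSERT INTO log_entries (stamp, host, message)
            #           VALUES (?, ?, ?)', undef, $ts, $host, $msg);
        }
        close $fh;
    }
}
```

The reopen loop matters: a plain one-shot read exits at EOF the first time syslogd closes the pipe.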

But in my testing, with data *in* the fifo file, I rebooted my machine, and when it came back up, the fifo file had a size of 0, meaning the data in the fifo was lost on the reboot. I cannot afford to lose log entries in this application, so it would seem that using a fifo for this purpose would not work.

So my questions are:
1. Has anyone done this successfully, and if so, how?
2. Am I missing something, or is there a way to make fifos persistent across reboots?
3. Is it possible to code a perl script such that the perl script can be specified right in /etc/syslog.conf, like local4.* |/path/to/my_perl_script.pl, in place of the fifo file?
Any help or ideas are greatly appreciated.

TIA.

Re: insert syslog log entries into MySQL?
by Abigail-II (Bishop) on Oct 20, 2003 at 16:06 UTC
    As for 2, the answer is no. If you want persistence, use files (and even then there's still the possibility of data loss). The data "waiting" in a pipe is stored in memory - that's why pipes are faster than files.

    As for 3, check the syslog.conf manual. If it's possible on RH 9, it'll be documented there. (This is no longer a Perl issue.)

    But I'm a bit surprised you're suffering from data loss. Sure, if a box crashes, last-minute messages could be lost (even before syslogd gets the chance to figure out where they go), but under normal conditions this should be rare, as reading from a pipe should be pretty fast. You are continuously reading from the pipe, aren't you?

    Abigail

      Ok, pipes and persistence don't go together. But you might be right - maybe the risk of data loss is small enough not to worry about :) I'll have to give this some thought.

      I've read just about everything I can find on syslog, including the manpages. I only presented question #3 because I saw an example (that I couldn't get working) from someone who wrote a C program and described getting it to work by piping the syslog messages straight from syslog.conf to his program. I'm not a C programmer, so I don't know what's special about the way he did that to get it to work.

      Thanks!
Re: insert syslog log entries into MySQL?
by mpeppler (Vicar) on Oct 20, 2003 at 16:00 UTC
    If you really don't want to risk losing any data I'd suggest having syslogd write to a real file, and then have a script that reads that file and pushes data to the database. It's a low-tech solution, but it works :-)

    Michael

      Not really what I wanted to hear, but you're right - it would work :-/ The advantage of using a fifo is that lines are deleted as they are read. The disadvantage of using a real file on disk is that the *reader* has to delete the lines it has read, or keep a pointer to the last line read, or something along those lines.
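      For what it's worth, the "pointer to the last line read" idea is not much code. Here's a sketch under stated assumptions: the paths and the state-file format (a single byte offset) are made up for illustration, and the MySQL insert would go where the push is.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: return the lines appended to $logfile since the last call,
# remembering the byte offset in a small state file between runs.
# Both paths are hypothetical -- adjust for your setup.
sub read_new_lines {
    my ($logfile, $statefile) = @_;

    # Recover the offset we stopped at last time (0 on the first run).
    my $offset = 0;
    if (open my $in, '<', $statefile) {
        my $saved = <$in>;
        $offset = $saved + 0 if defined $saved;
        close $in;
    }

    open my $log, '<', $logfile or die "Can't open $logfile: $!";
    # If the file was rotated or truncated, start over from the top.
    $offset = 0 if $offset > -s $logfile;
    seek $log, $offset, 0;

    my @lines;
    while (my $line = <$log>) {
        last unless $line =~ /\n$/;   # ignore a partially written last line
        chomp $line;
        push @lines, $line;           # here you would INSERT into MySQL
        $offset = tell $log;
    }
    close $log;

    # Persist the new offset for the next run.
    open my $out, '>', $statefile or die "Can't write $statefile: $!";
    print $out $offset;
    close $out;

    return @lines;
}
```

      Run from cron, nothing is lost across a reboot as long as the log file and offset file survive; worst case a crash between the insert and the offset write re-inserts a few rows, which a unique key on the table can absorb.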

      Michael, the project I'm working on is logging for a cluster - 2 or more nodes in a cluster, each one writing cluster log entries to syslog. I'm really trying to figure out how to have all cluster nodes log to a central log. Using standard syslog, the only ways I've seen it described is by having the central log on a server node that is NOT a cluster node - separate from the cluster, like this:
      On each cluster node, in /etc/syslog.conf:
          local4.* @my_logserver.my_company.com
      On the log server machine (my_logserver.my_company.com), in /etc/syslog.conf:
          local4.* /var/log/cluster.log
      But I'm looking for a way that will not require a separate log server machine. Have you had experience with anything similar?
        What OS? I think Solaris supports multiple remote log servers separated by commas, so you could list Servera,Serverb on both machines, and then both cluster machines would get a copy of all the log messages (although not necessarily in the same order). Also, depending on the OS and the type of cluster, the machines may share a disk; in that case, just configure syslog on both machines to write to the same file on the shared disk.


        -Waswas