Not really what I wanted to hear, but you're right - it would work :-/ The advantage of using a fifo is that as lines are read from the fifo, they are deleted. The disadvantage of using a real file on disk to write to and read from is that the *reader* has to delete the lines it has already read, or keep a pointer to the last line read, or something along those lines.
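To make the "keep a pointer" alternative concrete, here is a minimal sketch (my own illustration, not from the thread) of a reader that remembers a byte offset into a regular log file instead of relying on a fifo to consume lines; the file path and polling loop are hypothetical:

```python
def read_new_lines(path, offset):
    """Return (new_lines, new_offset) for lines appended since `offset`.

    The reader carries the offset forward between calls, so nothing
    ever has to be deleted from the log file on disk.
    """
    with open(path, "r") as f:
        f.seek(offset)          # skip everything already seen
        lines = f.readlines()   # read only what was appended
        return lines, f.tell()  # remember where we stopped

# Hypothetical usage - poll the file, carrying the offset forward:
#
# offset = 0
# while True:
#     lines, offset = read_new_lines("/var/log/cluster.log", offset)
#     for line in lines:
#         process(line)
```

In a real setup you would also want to persist the offset (and handle log rotation, which resets it), but the sketch shows the basic bookkeeping the fifo approach avoids.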
Michael, the project I'm working on is logging for a cluster - 2 or more nodes, each one writing cluster log entries to syslog. I'm trying to figure out how to have all cluster nodes log to a central log. Using standard syslog, the only way I've seen it described is to put the central log on a server that is NOT a cluster node - a machine separate from the cluster, like this:
cluster node /etc/syslog.conf:
    local4.* @my_logserver.my_company.com

log server (my_logserver.my_company.com) /etc/syslog.conf:
    local4.* |/var/log/cluster.log
But I'm looking for a way that will not require a separate log server machine. Have you had experience with anything similar?
What OS? I think Solaris supports multiple remote log servers separated by commas, so you could list Servera,Serverb on both machines, and then both cluster machines would get a copy of all the log messages (although not necessarily in the same order). Also, depending on the OS and the type of cluster, the machines may share a hard disk; in that case, just configure syslog on both machines to write to the same file on the shared disk.
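For what it's worth, the two suggestions above might look something like this in /etc/syslog.conf (untested - the comma-separated forwarding syntax is my recollection of the Solaris behavior described, so check your syslogd man page; hostnames and paths are placeholders):

```
# On each cluster node: forward to both peers, so each keeps a full copy.
# (Comma-separated destinations - if your syslogd supports it.)
local4.*    @servera,@serverb

# Alternative, if the cluster nodes share a disk: both nodes write the
# same selector to the same file on the shared filesystem.
local4.*    /shared/log/cluster.log
```

Note that with the shared-disk variant you'd want to be sure the cluster filesystem handles concurrent appends from both nodes safely.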