Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Bit stuck here. I've written a bit of code that tails a log file looking for a defined event. This works fine until the log file gets rotated; then, of course, it stops working, as the original file I was looking at has been renamed. I can't think of an elegant way to get around this, so any ideas would be helpful. Here's the code I've got so far:
#!/usr/bin/perl
use IO::Handle;
use File::stat;

$sleep   = 1;    # time to sleep between re-sampling file
$counter = 0;

while ( $_ = shift(@ARGV) ) {
    if ( /^-f/ ) { $file        = shift(@ARGV); }  # log file to monitor
    if ( /^-m/ ) { $max         = shift(@ARGV); }  # ceiling limit to delta update rates
    if ( /^-r/ ) { $max_exceeds = shift(@ARGV); }  # times the delta ceiling is ignored before being reported, optional
    if ( /^-l/ ) { $log         = shift(@ARGV); }  # log file to write to, optional
    if ( /^-c/ ) { $config      = shift(@ARGV); }  # use config file
}

if ( $config ) {
    open (CFG, $config) or die "Cannot open config file $config: $!\n";
    while (<CFG>) {
        chomp;
        s/#.*//;     # strip comments (the original s/#.// removed only '#' plus one character)
        s/^\s+//;
        s/\s+$//;
        next unless length;
        my ($var, $value) = split (/\s*=\s*/, $_, 2);
        no strict 'refs';
        $$var = $value;
    }
    close (CFG);
}

unless ( $max )         { $max         = 1500; }  # default limit for delta
unless ( $limit )       { $limit       = 300;  }  # default to 5 minutes
unless ( $max_exceeds ) { $max_exceeds = 5;    }  # default limit

if ( $log ) {
    open (STDOUT, "> $log") or die "Cannot write to $log: $!\n";
    STDOUT->autoflush(1);
}

if ( $verbose and $config ) { print "Using configuration file $config\n"; }
if ( $verbose ) {
    print "delta limit set to $max updates per sample period (30 seconds)\n";
    print "delta exceed timer set to $limit seconds\n";
    print "maximum number of exceeded deltas before reporting is $max_exceeds\n";
}

open (FILE, $file) or die "Cannot open file $file: $!\n";

for (;;) {
    while (<FILE>) {
        if ( /Delta/ ) {
            @line = split ' ', $_;    # "split, $_" in the original was a syntax error
            if ( $verbose ) { print "\nDelta found:\n@line\n"; }
            $delta = $line[4];
            if ( $delta >= $max ) {
                if ( $verbose ) { print "Current delta $delta exceeding $max and counter is $counter\n"; }
                $now_time = time;
                # if start time not set or zero, set time and alarm
                if ( ! $s_time or $s_time == 0 ) {
                    $s_time = time;
                    $alarm  = $s_time + $limit;
                    if ( $verbose ) { print "Alarm set to $alarm, current time $s_time\n"; }
                    $counter = 0;
                }
                # if counter has reached max_exceeds and the current time is <= alarm
                elsif ( $counter >= $max_exceeds and $now_time <= $alarm ) {
                    print "$delta exceeds threshold of $max $counter times\n";
                    $s_time  = 0;
                    $alarm   = 0;
                    $counter = 0;
                }
                # if current time >= alarm, reset back to 0, as the alarm was exceeded
                elsif ( $now_time >= $alarm ) {
                    if ( $verbose ) { print "Alarm timer exceeded.....resetting counter, time and alarms\n"; }
                    $counter = 0;
                    $alarm   = 0;
                    $s_time  = 0;
                }
                else { $counter++; }
            }
        }
    }
    sleep $sleep;
    last if stat(*FILE)->nlink == 0;   # stop once the open file has been unlinked
    FILE->clearerr();
}

close (FILE);
if ( $log ) { close (STDOUT); }

Replies are listed 'Best First'.
•Re: Tailing rolling logs
by merlyn (Sage) on Jan 30, 2004 at 12:08 UTC
      Hi, yup, looked at File::Tail, didn't think it could work with rolling logs, unless I read the spec wrong. Anyway, no matter: I think I found a way to check this using the inode of the file and the File::stat module. I added these lines to the main loop:
      if ( stat($file)->ino != $inode ) {   # file has rolled if true
          close (FILE);
          open (FILE, $file) or die "Cannot open file $file: $!\n";
          $inode = stat(*FILE)->ino;
      }
      Obviously, set $inode before entering the loop. It does seem to do the trick, but I need to do some more testing. Thanks, Jon
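For anyone wanting to try the inode check in isolation, here is a minimal, self-contained sketch of the same idea. The temporary directory and file names are purely illustrative; it simulates a logrotate-style move/create and confirms that comparing the path's inode against the one recorded at open time detects the rotation:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::stat;                  # gives stat() an object interface (->ino)
use File::Temp qw(tempdir);

my $dir  = tempdir(CLEANUP => 1);
my $file = "$dir/app.log";       # illustrative path

open my $fh, '>', $file or die "Cannot create $file: $!\n";
print {$fh} "line 1\n";
my $inode = stat($file)->ino;    # remember the inode we opened

# Simulate a logrotate move/create rotation:
rename $file, "$file.1" or die "rename failed: $!\n";
open my $new_fh, '>', $file or die "Cannot recreate $file: $!\n";

# The path now points at a different inode, so the check fires:
my $rolled = stat($file)->ino != $inode;
print $rolled ? "log has rolled\n" : "log unchanged\n";
```

The old inode cannot be reused while the rotated file still exists as app.log.1, so the comparison is reliable here.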
        Hi, yup, looked at File::Tail, didn't think it could work with rolling logs, unless I read the spec wrong.
        Did you miss this part?
        If the file does not get altered for a while, "File::Tail" gets suspicious and starts checking if the file was truncated, or moved and recreated. If anything like that had happened, "File::Tail" will quietly reopen the file, and continue reading. The only way to affect what happens on reopen is by setting the reset_tail parameter (see below). The effect of this is that the scripts need not be aware when the logfiles were rotated, they will just quietly work on.

        -- Randal L. Schwartz, Perl hacker
        Be sure to read my standard disclaimer if this is a reply.
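For reference, a minimal File::Tail version of the original script's loop might look like the sketch below. This assumes the File::Tail module from CPAN is installed; the path is illustrative, the /Delta/ match and field index are taken from the original post, and the parameter values are only a starting point (see the module's documentation, including the reset_tail behaviour quoted above):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Tail;    # CPAN module; not in core Perl

my $tail = File::Tail->new(
    name        => "/var/log/app.log",  # illustrative path
    interval    => 1,                   # initial pause between checks (seconds)
    maxinterval => 30,                  # back off to at most 30 seconds
    # reset_tail => ...,  # tunes what gets read after a rotation reopen;
    #                     # see the documentation quoted above
);

while (defined(my $line = $tail->read)) {
    next unless $line =~ /Delta/;
    my @fields = split ' ', $line;
    print "Delta found: $fields[4]\n";  # same field the original script uses
}
```

Rotation handling then happens entirely inside $tail->read, so none of the inode bookkeeping from the original script is needed.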

      Sweet module!
      How do you know about all these "fairly obscure" modules? I've never seen File::Tail before - it's very nice!
Re: Tailing rolling logs
by coec (Chaplain) on Jan 30, 2004 at 10:16 UTC
    Hmm, how exactly does it 'stop working'?
    Depending entirely on how the log rotation occurs, after the rotation you could be looking at the original file, it just has a .1 extension now.
    Red Hat's logrotate, for example, may (depending on the options supplied) move/create. That is, it will move the existing file to a new name and create a new file under the original name. Now, your running script (which I've called logr.pl for ease of reference) doesn't refer to the log file by name but by inode. A good way to test this (under Linux) is to start your script:
    logr.pl -f messages
    and get the process ID of logr.pl. 'ls -l /proc/<PID>/fd' and look at the file handles that are in use by your process. Now manually rotate the file and repeat the 'ls' above. In my tests, the logr.pl still had the original file open.
    Under other Unixes, 'lsof' or 'fuser' may provide similar info.
    As to a fix, you could include signal handling in logr.pl to re-read the file (or simply restart) on receipt of SIGHUP, for example.

    I hope that answers the question (correctly).
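The SIGHUP suggestion above can be sketched as follows. This is a minimal, stand-alone illustration, not the actual logr.pl: the handler only sets a flag, which the tail loop would check each pass before reopening the log (the reopen itself is shown as a comment because there is no real log file here):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $reopen = 0;
$SIG{HUP} = sub { $reopen = 1 };   # keep the handler trivial; do the work in the loop

# In logr.pl's main loop you would add something like:
#   if ($reopen) {
#       close FILE;
#       open FILE, '<', $file or die "Cannot reopen $file: $!\n";
#       $reopen = 0;
#   }

# Simulate logrotate's postrotate hook signalling us:
kill 'HUP', $$;
select undef, undef, undef, 0.1;   # let the (safe) signal be delivered
print "reopen flag: $reopen\n";
```

logrotate's postrotate script would then just run `kill -HUP` on the tailer's PID after each rotation.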

      Hi, yup, that is exactly what is happening: the file is being renamed to xyz.log.1, hence the issue. Your suggestion is valid, but I'd prefer something intelligent enough to notice that the filename has changed and then reopen the filehandle itself, as opposed to needing outside help. Something along the lines of checking the name of the file that the inode refers to, to verify that it is still the expected filename. Thanks for your suggestion. Jon
        I have no experience with it, but how about trying the file test operator
        -C
        which gives you
        'Age of file (at startup) in days since inode change.'
        I'm hoping for 'since inode change' ...
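A quick way to see what -C actually reports (the file name below is a throwaway temp file, used only for illustration). One caveat worth noting for a long-running tailer: "at startup" means the age is measured relative to $^T, the time the script began, so the value drifts as the script keeps running; stat()'s ctime field gives the same information without that caveat:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my ($fh, $name) = tempfile(UNLINK => 1);
print {$fh} "x\n";
close $fh;

my $age = -C $name;    # days since the file's inode was last changed, relative to $^T
printf "inode-change age: %.6f days\n", $age;

# Equivalent without the "at startup" caveat, using ctime directly:
my $age_now = (time - (stat $name)[10]) / 86400;
```

Since the temp file was created moments after the script started, both ages come out very close to zero.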