in reply to Logfile parsing across redundant files
Perhaps you could simply keep track of the file position of the last log entry you processed, and then process all entries found after that position. If the file is now smaller than the position you recorded the previous day, it was replaced (e.g., rotated), so take all of its lines from the beginning. Something like this (untested, off the top of my head):
```perl
use strict;
use warnings;
use Fcntl qw(SEEK_SET);

# Read logfile names with the last position read yesterday
my %logs;
open my $inf, '<', 'loglist.txt' or die;
while (<$inf>) {
    chomp;
    my ($fname, $fpos) = split /\|/;
    $logs{$fname} = $fpos;
}
close $inf;

# Get new lines from each file
for my $fname (keys %logs) {
    open my $ouf, '>>', $fname . '.cumulative' or die;
    open my $log, '<',  $fname                 or die;
    if ($logs{$fname} < (stat $log)[7]) {
        # Continue from where we left off yesterday
        seek $log, $logs{$fname}, SEEK_SET;
    }
    else {
        # File is smaller than yesterday's position (rotated/replaced),
        # so start at the beginning of the file
    }
    while (<$log>) {
        print $ouf $_;
    }
    $logs{$fname} = tell $log;
    close $log;
    close $ouf;
}

# Rewrite the list of files and positions
open my $ouf, '>', 'loglist.txt' or die;
print $ouf join "\n", map { "$_|$logs{$_}" } keys %logs;
close $ouf;
```

--roboticus
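For reference, the loglist.txt bookkeeping file this sketch assumes is just one pipe-delimited "filename|offset" pair per line (the file names below are hypothetical). On the first run you would seed each entry with an offset of 0 so every file is read from the start:

```
/var/log/app/server1.log|104857
/var/log/app/server2.log|0
```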
Replies:
- Re^2: Logfile parsing across redundant files by BrowserUk (Patriarch) on Feb 02, 2007 at 13:33 UTC
- by roboticus (Chancellor) on Feb 02, 2007 at 22:37 UTC