in reply to Logfile parsing across redundant files

thezip:

Perhaps you could simply keep track of the file position of the last log entry you've processed, and then process all entries found after that. If the file is smaller than the position you recorded the previous day (i.e., the log has been rotated), then take all of its lines. Something like this (untested, off the top of my head):

use Fcntl qw(SEEK_SET);

# Read logfile names and the last position read yesterday
my %logs;
open my $inf, '<', 'loglist.txt' or die $!;
while (<$inf>) {
    chomp;
    my ($fname, $fpos) = split /\|/;
    $logs{$fname} = $fpos;
}
close $inf;

# Get new lines from each file
for my $fname (keys %logs) {
    open my $ouf, '>>', "$fname.cumulative" or die $!;
    open my $inf, '<',  $fname              or die $!;
    if ($logs{$fname} <= (stat $inf)[7]) {
        # Continue from where we left off yesterday
        seek $inf, $logs{$fname}, SEEK_SET;
    }
    else {
        # File is smaller than the saved position (rotated),
        # so start at the beginning of the file
    }
    while (<$inf>) {
        print $ouf $_;
    }
    $logs{$fname} = tell $inf;
    close $inf;
    close $ouf;
}

# Rewrite the list of files and positions
open my $ouf, '>', 'loglist.txt' or die $!;
print $ouf join "\n", map { $_ . '|' . $logs{$_} } keys %logs;
close $ouf;
--roboticus

Re^2: Logfile parsing across redundant files
by BrowserUk (Patriarch) on Feb 02, 2007 at 13:33 UTC
      BrowserUk:

      Good catch! I guess we'd have to add some code to cache the first line as well. If the first line matches the cached one, do the same thing as above; if not, cache the new first line and take the whole file.
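
      A rough sketch of that check (untested, and assuming loglist.txt now carries a third field holding each file's cached first line, so %logs maps filename to [ position, first line ]):

      use Fcntl qw(SEEK_SET);

      for my $fname (keys %logs) {
          open my $in,  '<',  $fname              or die $!;
          open my $out, '>>', "$fname.cumulative" or die $!;

          my $first = <$in>;                  # current first line of the log
          chomp $first if defined $first;
          my ($fpos, $cached_first) = @{ $logs{$fname} };

          if (defined $cached_first && defined $first && $first eq $cached_first) {
              # Same file as yesterday: continue from the saved position
              seek $in, $fpos, SEEK_SET;
          }
          else {
              # First line changed (log rotated or replaced): take the whole file
              seek $in, 0, SEEK_SET;
          }

          print $out $_ while <$in>;

          $logs{$fname} = [ tell $in, $first ];   # remember position and first line
          close $in;
          close $out;
      }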

      --roboticus