smithers has asked for the wisdom of the Perl Monks concerning the following question:
Issue: I need to expand my monitoring to include numerous remote servers -- some accessed via slow or bandwidth-impaired links. My problem is not the large remote log file per se, as only a few new lines are appended hourly or daily. Rather, my approach for extracting the new lines from the large log files seems to suck. My current logic to get new lines is:

1. If the file modification date has changed, open the file, count the number of lines, and close it.
2. If the newly obtained line count differs from the last line count, reopen the file.
3. Read past and ignore the old lines.
4. Read the new lines and analyze patterns.
5. Persist the new file line count and mod date for the next analysis.
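For concreteness, here is an illustrative sketch of the two-pass logic described in the steps above (the function and state names are mine, not from the original post):

```perl
use strict;
use warnings;

# Two-pass approach: pass 1 counts lines, pass 2 skips the old
# ones and collects the rest. $state persists between runs.
sub get_new_lines_twopass {
    my ($path, $state) = @_;    # $state = { mtime => ..., lines => ... }

    my $mtime = (stat $path)[9];
    return [] if defined $state->{mtime} && $mtime == $state->{mtime};

    # Pass 1: count lines.
    open my $fh, '<', $path or die "open $path: $!";
    my $count = 0;
    $count++ while <$fh>;
    close $fh;

    my @new;
    if (!defined $state->{lines} || $count != $state->{lines}) {
        # Pass 2: read past the old lines, keep the new ones.
        open $fh, '<', $path or die "open $path: $!";
        my $seen = 0;
        while (my $line = <$fh>) {
            push @new, $line if ++$seen > ($state->{lines} // 0);
        }
        close $fh;
    }

    # Persist the new count and mod date for the next run.
    @$state{qw(mtime lines)} = ($mtime, $count);
    return \@new;
}
```

Note that a file which has grown reads every line twice: once to count, once to skip/collect — which is exactly the cost complained about below.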
This dual read (one pass for the line count, another to get the new lines) is where all my script's CPU and wall time is spent, and I could obviously try to combine steps 1-4 into a single journey through the file. However, before I do that I thought I would ask for suggestions. Is there a better way to periodically extract the new lines from a log file? Again, with the constraint that I not deploy any scripts or Perl distros to the local or remote servers where the logs reside.
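The single-journey combination mentioned above might look like this hedged sketch (illustrative names only): one trip through the file that counts lines and collects anything past the previously persisted count.

```perl
use strict;
use warnings;

# Single-pass approach: count and collect in the same read, so a
# grown file is traversed once instead of twice.
sub get_new_lines_onepass {
    my ($path, $state) = @_;    # $state = { mtime => ..., lines => ... }

    my $mtime = (stat $path)[9];
    return [] if defined $state->{mtime} && $mtime == $state->{mtime};

    open my $fh, '<', $path or die "open $path: $!";
    my $old   = $state->{lines} // 0;
    my $count = 0;
    my @new;
    while (my $line = <$fh>) {
        push @new, $line if ++$count > $old;
    }
    close $fh;

    @$state{qw(mtime lines)} = ($mtime, $count);    # persist for next run
    return \@new;
}
```

This still reads the whole file once per change. A further refinement, not in the post, would be to persist the byte offset (via `tell`) instead of a line count and `seek` straight past the old data on the next run, skipping even the skip-read -- assuming the log is only ever appended to.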
Thanks for sharing any ideas you may have.
Replies are listed 'Best First'.
Re: Extract new lines from log file
by BrowserUk (Patriarch) on Jan 02, 2007 at 19:55 UTC

Re: Extract new lines from log file
by ferreira (Chaplain) on Jan 02, 2007 at 20:00 UTC

Re: Extract new lines from log file
by smithers (Friar) on Jan 02, 2007 at 20:22 UTC
by smithers (Friar) on Jan 05, 2007 at 21:49 UTC

Re: Extract new lines from log file
by pileofrogs (Priest) on Jan 02, 2007 at 23:17 UTC