But your method doesn't eliminate the possibility of missing lines. You still have a race condition. What if your log got 20 lines appended to it during that 60-second sleep? You'd only see the last line.
It sounds to me like you want to emulate 'tail -f' on your log files and then continuously cycle through them (perhaps in a multi-threaded fashion) looking for updates. I agree with the previous poster, though, that a one-to-one relationship between processes and log files isn't such a good idea. I would suggest building a system where the number of children is configurable and they all share the job of scanning your logs. Each time a child comes to a log, it would need to pick up on the filehandle where it left off, scan until it reaches the end, then move on to whatever log it should do next, roughly as in the sketch below.
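Here's a minimal sketch of what I mean for a single child. It assumes a hypothetical list of log paths in @logs and a placeholder process_line() handler (not part of your code), and it leaves out log rotation handling entirely; it just remembers each file's offset with tell() and resumes from it with seek() on the next pass.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical list of logs this child is responsible for.
    my @logs = ('/var/log/app/one.log', '/var/log/app/two.log');

    # Remember where we left off in each file between passes.
    my %offset;

    while (1) {
        for my $log (@logs) {
            open my $fh, '<', $log or next;    # skip logs we can't read right now
            seek $fh, $offset{$log} || 0, 0;   # resume from the last recorded position
            while (my $line = <$fh>) {
                process_line($log, $line);     # placeholder handler
            }
            $offset{$log} = tell $fh;          # record how far we got this pass
            close $fh;
        }
        sleep 5;                               # short pause before the next sweep
    }

    sub process_line {
        my ($log, $line) = @_;
        print "[$log] $line";
    }

With a setup like that, adding more children is just a matter of splitting @logs among them; the per-file offsets are what keep you from re-reading or skipping lines between passes.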
In reply to Re: Re: Creating a Deamonized Log Scraper by ehdonhon, in thread Creating a Deamonized Log Scraper by quasimojo321