I think there is a larger issue here: the approach you are currently taking could seriously tax the resources of the system it runs on. If you fork once for each file you are trying to monitor, and you truly end up monitoring in excess of 120 such files, you will run into real overhead problems.
As a first step, I would recommend that you structure the program around regularly polling the files for actual changes in modification time. If all you do is look at the last line of each file every n minutes, won't you send multiple mail alerts for the same line simply because no new line has appeared since the last poll?
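Here is a minimal sketch of what that polling loop might look like in one process, assuming a hard-coded list of files and a five-minute interval; the file names, the interval, and the read_new_lines() helper are all hypothetical, not taken from your code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical file list and poll interval -- adjust to taste.
    my @logfiles      = ('/var/log/app1.log', '/var/log/app2.log');
    my $poll_interval = 300;    # "n minutes", expressed in seconds
    my (%last_mtime, %last_size);

    while (1) {
        for my $file (@logfiles) {
            my ($size, $mtime) = (stat $file)[7, 9];
            next unless defined $mtime;

            # Only read the file if it has actually changed since the
            # last poll; otherwise there is nothing new to alert on.
            if (!exists $last_mtime{$file} or $mtime > $last_mtime{$file}) {
                read_new_lines($file, $last_size{$file} || 0);
                $last_mtime{$file} = $mtime;
                $last_size{$file}  = $size;
            }
        }
        sleep $poll_interval;
    }

    # Hypothetical helper: seek past what was already seen and hand
    # each genuinely new line to the pattern check in the main program.
    sub read_new_lines {
        my ($file, $offset) = @_;
        open my $fh, '<', $file or return;
        seek $fh, $offset, 0;
        while (my $line = <$fh>) {
            # ... match $line against the alert pattern here ...
        }
        close $fh;
    }

Because the loop tracks the last-seen size per file, it only ever looks at lines it has not already reported on, and one process can cover all 120+ files.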
Secondly, the only real need for forking is to keep the sending of mail from blocking the main program. To that end, once you have a genuinely new line of output from one of the logs, check it against the pattern within the main program. If it warrants an e-mail alert, have the subroutine that sends the alert do the forking. There is also no need to sleep after the mail is sent (unless that was part of the original design, to wait a certain period before polling the logfile again). If the forked child process is responsible only for sending the e-mail, it can use exec() rather than system(). You will also want to do something with the CHLD signal in the parent process so the finished children get reaped.
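A minimal sketch of such an alert subroutine; the /usr/bin/mail path and the recipient address are assumptions for illustration, not part of your design. The parent installs a CHLD handler to reap children, and each child exec()s the mailer directly:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX ':sys_wait_h';

    # Reap finished mail-sender children so they never linger as zombies.
    $SIG{CHLD} = sub { 1 while waitpid(-1, WNOHANG) > 0 };

    # Hypothetical alert subroutine.
    sub send_alert {
        my ($subject, $body) = @_;

        pipe my $reader, my $writer or die "pipe failed: $!";
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;

        if ($pid) {
            # Parent: hand the body to the child and go straight back
            # to polling; no sleep is needed here.
            close $reader;
            print {$writer} $body;
            close $writer;
            return;
        }

        # Child: wire the pipe to STDIN, then replace this process with
        # the mailer via exec() -- no second interpreter sticks around.
        close $writer;
        open STDIN, '<&', $reader or exit 1;
        exec '/usr/bin/mail', '-s', $subject, 'admin@example.com';
        exit 1;    # reached only if exec() fails
    }

Note that the parent's print to the pipe assumes a small alert body; anything larger than the pipe buffer would block until the mailer starts reading.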
Getting back to the main point: I don't think you really want a one-to-one mapping of process to logfile, unless the machine running these processes is dedicated to just this task. This isn't a trivial problem, especially not with the number of files you expect to be monitoring simultaneously. It is well worth taking the time to plan it through carefully.
--rjray