Assuming the latter, you could remember the first line of the file and the number of the last line you processed. When you fetch the file, examine its first line. If it is the same as the line you saved, the file has not been rotated yet, so you know how many lines to skip without parsing (that's the second value you remembered). If it is different, the file has been rotated, and you reset the "lines already seen" counter to zero.
Having said that, fetching the web server logs every minute seems extraordinarily wasteful. The traffic from transferring the logs would soon become a significant chunk of the total site traffic. If you need up-to-the-minute information, you are much better off talking to whoever controls the server and arranging for a small monitoring program to read the log file as it is written (perhaps using File::Tail), process as much information as possible, and forward only that information to the script on your server.
In reply to Re: Downloading and parsing apache logs
by matija
in thread Downloading and parsing apache logs
by juanmarcosmoren