My Perl code produces some log files that can grow to be pretty large. Right now, I am manually deleting them occasionally, but what I would prefer is some automatic routine that I could call that would inspect the size of a log file and chop off a certain amount at the *beginning* of the file (oldest info) to keep it within desired size limits. In other words, I really do not want to delete the log file, just get rid of enough info at the beginning of the file to keep the size under control.
I am thinking that I will need two size values: the "trigger" file size and the chopped-down target size, e.g., 1 MB and 500 KB respectively. If I used only one value, then once the log reached the limit, nearly every subsequent write would push it back over and re-run the trimming routine; using two values means the routine only runs occasionally, so it is a performance tweak. (A sketch of this check follows.)
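For what it's worth, here is a minimal sketch of what that two-threshold check might look like. The `$TRIGGER_SIZE` and `$TARGET_SIZE` names and the `trim_log()` routine are just placeholders for illustration (`trim_log()` is sketched further below); the `-s` file test is the standard Perl way to get a file's size in bytes:

```perl
use strict;
use warnings;

# Hypothetical thresholds: trim only once the log passes 1 MB,
# and cut it back down to roughly 500 KB.
my $TRIGGER_SIZE = 1_048_576;    # 1 MB
my $TARGET_SIZE  =   512_000;    # 500 KB

my $logfile = 'myapp.log';

# -s returns the file size in bytes (undef if the file does not exist).
my $size = -s $logfile;
if ( defined $size && $size > $TRIGGER_SIZE ) {
    trim_log( $logfile, $TARGET_SIZE );    # defined in the sketch below
}
```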
I am thinking I will check the file size (not sure how to do this in Perl), detect whether it is over the "trigger" size, and then compute the difference between the actual size and the chopped-down target size. I would use that difference as a seek offset into the file being trimmed. However, since I do not believe I can delete from the front end of a file, I am thinking I will need to find the first end-of-line character after the seek point (these are text files, so I do not want partial lines), then open a new file and copy everything after that end-of-line into it. After that I can delete the old log file. I am still debating whether I should set the timestamp of the new log file to match the old one. (I don't know if Perl allows manipulating file timestamps.) A sketch of this whole routine follows.
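Here is a sketch of the trimming routine described above, under the assumption that a copy-and-rename is acceptable (all names are illustrative). It seeks past the oldest bytes, discards the partial line it lands in, copies the remaining full lines to a temporary file, swaps that file into place, and restores the original timestamps with the built-in `utime` function:

```perl
use strict;
use warnings;
use File::Copy qw(move);

sub trim_log {
    my ( $logfile, $target_size ) = @_;

    my $size = -s $logfile;
    return if !defined $size || $size <= $target_size;

    # Remember the original timestamps so we can restore them later.
    my ( $atime, $mtime ) = ( stat $logfile )[ 8, 9 ];

    open my $in, '<', $logfile or die "Can't read $logfile: $!";

    # Position the file pointer so roughly $target_size bytes remain,
    # then discard the (likely partial) line we landed in.
    seek $in, $size - $target_size, 0;
    <$in>;

    my $tmpfile = "$logfile.tmp";
    open my $out, '>', $tmpfile or die "Can't write $tmpfile: $!";

    # Copy everything from the first full line onward.
    print {$out} $_ while <$in>;

    close $in;
    close $out or die "Can't close $tmpfile: $!";

    # Replace the old log with the trimmed copy...
    move( $tmpfile, $logfile ) or die "Can't rename $tmpfile: $!";

    # ...and restore the original access/modification times.
    utime $atime, $mtime, $logfile;
}
```

One caveat on the design: if another process holds the log open for writing, it will keep writing to the old (now-deleted or renamed) file handle, so this approach works best when the trim runs from the same program that owns the log, between writes.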
So... I am looking for some feedback and input on my approach. I am sure others would probably benefit from this also. Am I making this too complicated? Is there a simpler approach?