kardster has asked for the wisdom of the Perl Monks concerning the following question:

I have an application that will run for months unattended. I want to be able to keep a log file that can be checked for possible problems, but don't want it to grow unbounded. Therefore, I'd like to limit its size to a maximum such as 1 MB.

Should I use the truncate function after flushing new log data? Does truncate delete the oldest or newest data in the file?

Is there a different mechanism I should use?
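[Editor's note: to the truncate question directly — truncate($fh, $len) keeps the *first* $len bytes of the file and discards everything after them, so on an append-only log it is the newest data that is lost. A minimal sketch (the helper name and file are made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# truncate($fh, $len) keeps the FIRST $len bytes and discards the
# rest -- for an append-only log, that means the NEWEST data is lost.
sub cap_file {
    my ($path, $max) = @_;
    return if -s $path <= $max;                  # nothing to do yet
    open my $fh, '+<', $path or die "open $path: $!";
    truncate $fh, $max or die "truncate: $!";
    close $fh or die "close: $!";
}
```
]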

Replies are listed 'Best First'.
Re: Limiting log file size
by RMGir (Prior) on Apr 02, 2002 at 17:26 UTC
    You might want to look at Logfile::Rotate.

    It doesn't seem to support your 1M cap idea, but if you add some checking for the size exceeding your limits, its "rotate" method will handle moving off the old log file, and optionally compressing it for you.
    --
    Mike
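[Editor's note: an untested sketch of that check-then-rotate idea. The file name and retention count are made up, and the constructor arguments are as I recall Logfile::Rotate's documented interface — double-check against the module's POD:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Logfile::Rotate;

my $file = '/var/log/myapp.log';   # hypothetical log file
my $max  = 2**20;                  # 1 MB cap

# Logfile::Rotate has no size trigger of its own, so check first.
if (-s $file > $max) {
    my $log = Logfile::Rotate->new(
        File  => $file,
        Count => 5,        # keep myapp.log.1 .. myapp.log.5
        Gzip  => 'lib',    # compress old logs with Compress::Zlib
    );
    $log->rotate;
}
```

Run this from cron and the log never grows much past the cap.]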

•Re: Limiting log file size
by merlyn (Sage) on Apr 02, 2002 at 17:31 UTC
    Well, as a first start, I'd try something like this:
        #!/usr/bin/perl
        my $FILE    = "the_log_file";
        my $MAXSIZE = 2**20;               # 1 meg
        exit 0 unless -s $FILE > $MAXSIZE;
        @ARGV = $FILE;                     # set diamond to file
        undef $/;                          # slurp mode
        $^I = "";                          # enable in-place editing
        while (<>) {
            substr($_, 0, -$MAXSIZE) = ""; # keep only the last $MAXSIZE bytes
            s/.*\n//;                      # toss one line so it comes out at a line boundary
            print;
        }

    -- Randal L. Schwartz, Perl hacker

      undef $/; # slurp mode
      $^I = ""; # enable in-place editing

      This sounds like a nice job for Tie::File.

          #!/usr/bin/perl -w
          use Tie::File;
          use strict;

          my $max  = 512;
          my $file = 'syslog';

          tie my @lines, 'Tie::File', $file;

          my $size = 0;
          for my $line (reverse -@lines .. -1) {
              if (($size += length $lines[$line]) > $max) {
                  splice @lines, 0, (@lines + $line + 1);
                  last;
              }
          }

          untie @lines;
      (Didn't use a negative splice length, because Tie::File seems to be unable to handle it.)

      U28geW91IGNhbiBhbGwgcm90MTMgY
      W5kIHBhY2soKS4gQnV0IGRvIHlvdS
      ByZWNvZ25pc2UgQmFzZTY0IHdoZW4
      geW91IHNlZSBpdD8gIC0tIEp1ZXJk
      

Re: Limiting log file size
by Fletch (Bishop) on Apr 02, 2002 at 17:28 UTC

    Set up your program so that when it receives a signal (SIGHUP or SIGUSR1 for example) it sets a flag. In the course of the main loop, if the flag is set call a routine which closes and reopens your log filehandle. Then you can just use something like logrotate to do the rotation.
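    [Editor's note: a minimal sketch of that flag-and-reopen pattern. The log file name and the choice of SIGHUP are arbitrary:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;

my $reopen = 0;
$SIG{HUP} = sub { $reopen = 1 };      # only set a flag in the handler

open my $log, '>>', 'app.log' or die "open: $!";
$log->autoflush(1);

sub log_msg {
    my ($msg) = @_;
    if ($reopen) {                    # rotation tool has renamed the file
        $reopen = 0;
        close $log;
        open $log, '>>', 'app.log' or die "reopen: $!";
        $log->autoflush(1);
    }
    print {$log} "$msg\n";
}
```

The handler does nothing but set the flag, so no I/O happens at signal time; the reopen is done at a safe point in the main loop.]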

    Another possibility would be to use your platform's syslog facility and let whatever handles rotating those log files deal with things.
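    [Editor's note: the core Sys::Syslog module is enough for the syslog route. A sketch — the ident and facility are arbitrary, and it assumes a running syslog daemon, so it is untested here:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Sys::Syslog qw(openlog syslog closelog);

openlog('myapp', 'ndelay,pid', 'daemon');   # ident, options, facility
syslog('info',    'started');
syslog('warning', 'disk at %d%%', 92);      # printf-style formatting
closelog();
```

Rotation then becomes entirely the system's problem.]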

Re: Limiting log file size
by perlplexer (Hermit) on Apr 02, 2002 at 19:46 UTC
    # NOTE: this code is untested
    chopf('/path/to/log.dat', 1000000);

    sub chopf {
        my ($file, $size) = @_;
        local *HEAD;
        local *TAIL;

        return 1 unless -s $file > $size;
        return 0 unless open HEAD, "+<$file";
        unless (open TAIL, "+<$file") {
            close HEAD;
            return 0;
        }
        seek TAIL, -s($file) - $size, 0;
        <TAIL>;                           # skip to the next line boundary
        print HEAD $_ while (<TAIL>);
        truncate HEAD, tell(HEAD);
        close HEAD;
        close TAIL;
        return 1;
    }

    --perlplexer
      OK,

          seek TAIL, -s($file) - $size, 0;

      needs to be

          seek TAIL, (-s $file) - $size, 0;

      --perlplexer
Re: Limiting log file size
by mla (Beadle) on Apr 02, 2002 at 19:06 UTC
    I'd look at using logrotate which may already be installed on your system.
    It's used by many (most?) Linux distributions to rotate the system logs and is very flexible.
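    [Editor's note: a 1 MB cap in logrotate is a one-line directive. A sketch of a config drop-in — the path and file name are illustrative:

```
# /etc/logrotate.d/myapp  (hypothetical)
/var/log/myapp.log {
    size 1M          # rotate once the file exceeds 1 MB
    rotate 4         # keep four old copies
    compress
    missingok
    notifempty
}
```

Combine this with the signal-and-reopen approach above (logrotate's postrotate script can send the signal) and the application needs no size logic at all.]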