in reply to How to do a manual "logrotate" on a given file?

From Mark Jason Dominus' Lightweight Databases talk:

    use Tie::File;

    tie @LOG, 'Tie::File', '/etc/logfile';

    # append to the log and keep only the most recent 100 lines
    # (note: a sub named 'log' collides with Perl's builtin log(),
    # so it has to be called as &log(...) or given another name)
    sub log {
        push @LOG, @_;
        my $overflow = @LOG - 100;
        splice @LOG, 0, $overflow if $overflow > 0;
    }

His solution ties the log file to an array using Tie::File, which is a core module.

Re^2: How to do a manual "logrotate" on a given file?
by JavaFan (Canon) on Nov 24, 2009 at 15:19 UTC
    That doesn't work if the file is continuously being updated by another process (or processes), which is often the case with logfiles. It's only a solution if you are in full control of the single process that's updating the log file. Even if there are just two instances of the same program logging to the file, the given solution is going to fail.

      I think I'll give it a try. AFAIK the logfile is produced only once a day, at a fixed time, so if it's possible that the file isn't altered during those few seconds, it should be OK ... I think it might be a cron job, but I am still searching for that ...

      Thanks in advance, MH
      I don't know much about concurrent access to files, but wouldn't locking the file solve this issue?

        Well, you would have to lock the file using Tie::File's flock method. That invalidates the cache, so it's not efficient. And it will only work if the writers use flock as well (which not every writer actually needs to do: if all writers open the file for append and each write is smaller than the buffer size, typically 8k, data won't be garbled).
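
        A rough, untested sketch of that locking approach, assuming every writer goes through the same code (the file name and the 100-line limit are just placeholders):

            use strict;
            use warnings;
            use Tie::File;

            my @LOG;
            my $o = tie @LOG, 'Tie::File', '/etc/logfile'
                or die "Cannot tie /etc/logfile: $!";

            $o->flock;      # exclusive lock by default; also discards Tie::File's cache

            push @LOG, "new entry";
            my $overflow = @LOG - 100;
            splice @LOG, 0, $overflow if $overflow > 0;

            undef $o;       # drop the extra reference before untie
            untie @LOG;     # closes the file and releases the lock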

        But I fail to see the point of Tie::File in the first place. Considering the OP just wants to move files, and doesn't express any wish to actually remove data, what's the point of reading the file in at all?
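
        For instance, a plain rename-based rotation would do without reading a single line (untested sketch; the path and the number of kept generations are just placeholders):

            use strict;
            use warnings;

            my $log  = '/etc/logfile';
            my $keep = 5;                      # keep logfile.1 .. logfile.5

            # shift the old generations up: logfile.4 -> logfile.5, etc.
            for my $n (reverse 1 .. $keep - 1) {
                rename "$log.$n", "$log." . ($n + 1) if -e "$log.$n";
            }

            # move the current file aside; the writer recreates it (or gets a HUP)
            rename $log, "$log.1" if -e $log;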