I'd unshift the new record onto @logfile and print the first hundred. What you have should work fine except that it prints 102 records: one extra from the off-by-one in 0..100, and another from the added new record. Here goes:
    use Fcntl qw( :flock );

    open(LOG, '+<', '/path/to/log.dat') or die $!;
    flock(LOG, LOCK_EX) or die $!;

    my @logfile = <LOG>;
    unshift @logfile,
        "$date - $requesturi - ($httpuseragent)"
      . " $remotehost: $remoteport ($remoteaddr) -"
      . " $httpreferer\n";

    # rewind and rewrite the first hundred records
    seek LOG, 0, 0 or die $!;
    my $last = $#logfile < 99 ? $#logfile : 99;  # guard against files shorter than 100 lines
    print LOG @logfile[0 .. $last] or die $!;
    truncate(LOG, tell(LOG)) or die $!;  # chop any stale tail if the rewrite came up short
    close(LOG) or die $!;
I used a single nontruncating open for read/write so that the lock holds through the whole read-and-write cycle. That way another instance can't slip in with a record that would be lost before you write. The explicit truncate at the end chops off any stale bytes left over when the rewritten file is shorter than the original.
I also added some niceties, like checking for system errors and using the Fcntl constants for locking.
Is there some reason you don't want to simply append to the logfile? That would be more normal.
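If you do append, the whole read-rewrite-truncate dance disappears. A minimal sketch, reusing the same variables as above:

    use Fcntl qw( :flock );

    open(LOG, '>>', '/path/to/log.dat') or die $!;
    flock(LOG, LOCK_EX) or die $!;
    print LOG "$date - $requesturi - ($httpuseragent)",
              " $remotehost: $remoteport ($remoteaddr) -",
              " $httpreferer\n" or die $!;
    close(LOG) or die $!;

Append mode places every write at the end of the file, so the lock only has to keep concurrent writers from interleaving partial records.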
After Compline,
Zaxo