in reply to Removing lines from files

Just replace this:
    tie my @file, 'Tie::File', $File::Find::name
        or die "Can't tie $File::Find::name $!";
    splice @file, 0, 4;
    untie @file;
with:
    open(F, '<', $File::Find::name) or die "...";
    my $tmp = $File::Find::name . " - new";   # see comment below
    open(G, '>', $tmp) or die "...";
    for (1..4) { <F> }    # don't copy the first four lines
    while (<F>) { print G }
    close(G);
    close(F);
    rename($tmp, $File::Find::name)
        or warn "unable to replace $File::Find::name: $!\n";
The only caveat is that you have to ensure that $tmp can never be the name of an existing log file.
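One way to sidestep that caveat entirely is to let File::Temp (a core module) pick a guaranteed-unique name in the same directory as the target. A minimal sketch; the `$target` name here is a stand-in for `$File::Find::name`, chosen purely for illustration:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);
use File::Basename qw(dirname);

# Stand-in for $File::Find::name (illustrative name only).
my $target = "./app.log";

# tempfile() generates a name that cannot collide with an existing
# file; keeping it in the same directory means the later rename()
# stays on one filesystem, so it is atomic.
my ($fh, $tmp) = tempfile("app.log.XXXXXX", DIR => dirname($target));
close $fh;
```

After the copy, `rename($tmp, $target)` proceeds exactly as in the code above.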

The tie method is inefficient here not because it loads the whole file into memory (Tie::File keeps only a bounded cache), but because splicing records off the front forces it to rewrite the entire file, with per-record bookkeeping on top.

One advantage that this approach has over an in-place re-write is that you won't have to worry about leaving yourself with a corrupted log file if the copy is interrupted.

Re^2: Removing lines from files
by Cristoforo (Curate) on May 09, 2008 at 20:09 UTC
    In a PDF found here, "Lightweight Database Techniques" tutorial materials freed, Dominus says Tie::File is for convenience, not performance, though he also says it's reasonably fast. The job you're doing here is removing the first 4 lines from each of 200,000 files, so Tie::File will have to rewrite every large file from the point of the change to the end. He says that since the module must perform reasonably well for many different types of applications, it's slower than code custom-written for a single application.
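The rewrite behaviour described above is easy to see in a small self-contained sketch (sample data is made up for illustration): splicing off the first records shifts every remaining record toward the start of the file, i.e. the whole file after the change is rewritten.

```perl
use strict;
use warnings;
use Tie::File;
use File::Temp qw(tempfile);

# Build a small sample file (hypothetical contents, illustration only).
my ($fh, $path) = tempfile();
print $fh "line $_\n" for 1 .. 10;
close $fh;

# Removing the first 4 records forces Tie::File to move every
# remaining record up by four positions on disk.
tie my @file, 'Tie::File', $path or die "Can't tie $path: $!";
splice @file, 0, 4;
untie @file;
```

After this runs, the file's first line is what used to be line 5, so for large files essentially the whole file gets rewritten, just as with the copy approach, but with extra per-record overhead.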