in reply to Removing lines from files

Have you tried doing
shift @file for 1..4;
instead of the "splice"? Not sure if that would help, but it might. Update: It does not. See pc88mxer's reply below.
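
For concreteness, the comparison I have in mind is roughly this (a sketch only; 'some.log' is a placeholder and the OP's actual tie call may differ):

use strict;
use warnings;
use Tie::File;

tie my @file, 'Tie::File', 'some.log'
    or die "Cannot tie file: $!";

splice @file, 0, 4;        # what the OP does: drop the first four lines in one call
# shift @file for 1..4;    # the alternative suggested above (no faster, per the update)

untie @file;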

Why do you use a "foreach" loop over a scalar (foreach ( $File::Find::name ))? It seems that $_ is not even referenced in that loop.

Although "Tie::File" claims to be efficient, you are not using it for "random, array-style access", so the overhead may be too high for your case. Benchmark can help find more optimal mechanisms.

++ on using modules to reduce the amount of code you write! (Although this may appear contrary to my previous sentence.)

Update 1: pc88mxer: The Tie::File docs claim that it does NOT read the file into memory. Also, by Tie::File's own claims, your method (re-writing the relevant part of the file) should be LESS efficient than Tie::File. I will attempt to benchmark & post here.

     "How many times do I have to tell you again and again .. not to be repetitive?"

Re^2: Removing lines from files
by pc88mxer (Vicar) on May 09, 2008 at 19:39 UTC
    A check of the source code reveals that the SHIFT method for Tie::File is implemented in terms of the SPLICE method:
    sub SHIFT { my $self = shift; scalar $self->SPLICE(0, 1); }
    Besides, the file is updated after every shift operation, which means you'd be re-writing the file four times (!)
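    In other words (a sketch, assuming a Tie::File-tied @file):
    shift @file for 1..4;   # four SPLICE(0, 1) calls, so the file is rewritten four times
    splice @file, 0, 4;     # one SPLICE call, so the file is rewritten once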

    I don't think Tie::File is the right approach to manipulate 700 MB log files.

    Update: I have a feeling the OP is running into out-of-memory problems because Tie::File is keeping track of the start of each line even though it doesn't need to. For a multi-megabyte file this would clearly be a problem. However, this is currently just a conjecture.