in reply to writing to the top of a file

I think broquaint's solution will prove to be the most efficient, and will tend to impose the smallest load on resources. Of course, if the file gets really big, it'll take longer to re-write the whole thing each time you add a line or few at the top -- but the growth in delay will be relatively minor and nothing else will blow up, because you're not trying to hold the whole thing in memory (which is what would happen with coec's suggestion for using Tie::File). (There might be a way to use Tie::File that wouldn't involve holding the entire file contents in memory, but broquaint's approach is just easier.)
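
For reference, here's a minimal sketch of that rewrite-the-whole-file approach (the temp-file name and the prepend_lines() helper are my own illustration, not broquaint's actual code). Because it copies line by line, it never holds the whole file in memory:

    use strict;
    use warnings;

    sub prepend_lines {
        my ($file, @new_lines) = @_;
        my $tmp = "$file.tmp";   # illustrative temp-file name
        open my $in,  '<', $file or die "open $file: $!";
        open my $out, '>', $tmp  or die "open $tmp: $!";
        print $out @new_lines;          # new content goes in first...
        print $out $_ while <$in>;      # ...then the old file, copied line by line
        close $in;
        close $out or die "close $tmp: $!";
        rename $tmp, $file or die "rename $tmp: $!";
    }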

Apart from that, you might consider looking at the problem a different way: keep the file I/O simple (always append new text at the end of the file, the way God intended), and just change how you read and manipulate the file data for display.

If the people reading the display are only interested in the most recent content, you might decide that they won't want/need to look any farther than the "N" most recent lines. If you have the "tail" utility (everybody should have this by now), why not use it, since it was created to do just what you want (mostly):

my @latest = reverse(`tail -$n $datafile`);   # the last $n lines of $datafile, most recent line first
If you're offended by the use of backticks, you might instead estimate how many bytes would likely cover $n lines, seek that many bytes back from the end of the file, read to the end of the file in the usual way (assigning lines to array elements), and reverse the array. The oldest line might be just a fragment, but you could simply ignore that one. (And $n could even be user-specified.)
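
A rough sketch of that seek-based variant, reusing $n and $datafile from above (the 80-bytes-per-line estimate is just an assumption -- tune it for your data):

    use Fcntl qw(SEEK_SET);

    open my $fh, '<', $datafile or die "open $datafile: $!";
    my $size  = -s $fh;
    my $guess = $n * 80;   # assumed average bytes per line
    seek $fh, ($size > $guess ? $size - $guess : 0), SEEK_SET;
    my @lines = <$fh>;
    shift @lines if $size > $guess;               # first line is probably a fragment
    @lines = @lines[ -$n .. -1 ] if @lines > $n;  # keep only the last $n lines
    my @latest = reverse @lines;                  # most recent line first, as before
    close $fh;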

(update: As usual, BrowserUK has provided a more sensible and effective approach. I'd follow his advice.)

Re: Re: writing to the top of a file
by reTard (Sexton) on May 20, 2004 at 03:55 UTC
    Being retarded, I'll ask a silly question or two:

    1) As coec said, what about multiple instances of the script running at the same time? What's the best way to prevent data loss? Is Perl's advisory locking (flock) sufficient?

    2) If the file gets really large, broquaint's solution could (potentially) fill up the file system.

    I am very new to PerlMonks, to Perl, and to most other things IT. Just curious about these issues.

    retard

      1. Every time this issue comes up, I tend to favor using a semaphore file (a minimal sketch follows these two points). Provided that we're talking about a file that is only updated by Perl processes, and that all these processes follow the same procedure in terms of "asking" for access to update the file, then there's no problem. (Find an example of a semaphore file module here, which includes a reference to a very good article on locking any shared resource.)

      2. If the file size happens to be equal to or greater than the available free space on a drive, any solution for trying to "prepend" new data at the top of a file will overfill the file system, because you need to write the new file before you can delete the old one. That's a very good reason to avoid trying to update a file this way. Appending to a file will only fail if the amount of new data being added exceeds the amount of available free space on the drive.
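
      Here's a minimal sketch of the semaphore-file idea from point 1, using flock on a separate lock file (the ".lock" file name and $new_line are just illustrations). Plain advisory locking is sufficient as long as every writer goes through the same ritual:

          use Fcntl qw(:flock);

          # the lock file guards access; its contents never matter
          open my $sem, '>', "$datafile.lock" or die "open lock: $!";
          flock $sem, LOCK_EX or die "flock: $!";   # blocks until we hold the lock

          open my $fh, '>>', $datafile or die "open $datafile: $!";
          print $fh $new_line;                      # the actual update
          close $fh;

          close $sem;   # closing the semaphore handle releases the lock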

      You're wise to be curious about these issues.