in reply to Using +> for File Read/Write

By opening a file you get a cursor into it. It’s the thing you move with seek.

When you read, say, 20 bytes from the file, the cursor moves forward 20 bytes, so the next read will return the next part.

When you write, say, 20 bytes to the file, the cursor moves forward 20 bytes, so the next write will write the next part. If there was already content past where the cursor was pointing, and you write 20 bytes at that position, then 20 bytes of the previous content get overwritten.

That much should be pretty clear; according to what you say, you already understand that.

Now, there’s no reason you need to always write or always read. You can read 20 bytes, then write 20 bytes; the cursor will now be at offset 40, waiting for your next action. Or you can seek around wildly, reading here, writing there, doing whatever wherever.
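As a minimal sketch of that read-then-write dance (using a temporary file so it is self-contained; note that stdio semantics require a seek, even a no-op one, when switching between reading and writing on the same handle):

```perl
use strict;
use warnings;
use Fcntl qw(:seek);
use File::Temp qw(tempfile);

# Create a 60-byte scratch file to play with.
my ($fh, $path) = tempfile();
print $fh 'A' x 60;
close $fh;

# '+<' opens for reading AND writing without clobbering the contents.
open my $io, '+<', $path or die "open: $!";
read $io, my $buf, 20;      # cursor is now at offset 20
seek $io, 0, SEEK_CUR;      # no-op seek: required when switching
                            #   from reading to writing
print $io 'B' x 20;         # overwrites bytes 20..39; cursor at 40
close $io;

# Read it back: the middle 20 bytes have been replaced.
open my $in, '<', $path or die "open: $!";
my $content = do { local $/; <$in> };
close $in;
# $content is now 'A' x 20 . 'B' x 20 . 'A' x 20
```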

That’s all there is to it.

Still confused?

Makeshifts last the longest.

Replies are listed 'Best First'.
Re^2: Using +> for File Read/Write
by slloyd (Hermit) on Nov 01, 2005 at 21:15 UTC
    I understand about the pointer. I just have not had a reason to use "+>" yet. Do you have any example scripts that demonstrate when this would be useful? I have been coding in Perl for over 10 years and have yet to use it.

      For +>, I actually can’t think of anything off hand, but +< is quite useful when you keep state in a file.

      For a simple example, think of one of those visitor counter CGI scripts. You could use this to keep the counter in a simple file. When the script starts, it opens the file for writable reading; then it acquires an exclusive lock on the file; then it reads the counter, increases it by 1, seeks back to the start of the file, writes the new value, and truncates the file; then it closes the file (thus releasing the lock).
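      A sketch of that counter, assuming the counter file already exists (the sub name bump_counter is made up for illustration):

```perl
use strict;
use warnings;
use Fcntl qw(:flock :seek);
use IO::Handle;

sub bump_counter {
    my ($path) = @_;
    open my $fh, '+<', $path or die "open $path: $!";
    flock $fh, LOCK_EX or die "flock: $!";   # exclusive lock
    my $count = <$fh> // 0;                  # read current value
    $count =~ s/\s+\z//;
    $count++;
    seek $fh, 0, SEEK_SET or die "seek: $!"; # back to the start
    print $fh $count;                        # write the new value
    $fh->flush;
    truncate $fh, tell $fh or die "truncate: $!"; # drop any leftovers
    close $fh;                               # releases the lock
    return $count;
}
```

      The truncate matters: if the old value was longer than the new one (say the file is reset from "100" to "1"), stale digits would otherwise survive past the freshly written bytes.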

      Another use is simple databases, particularly files with fixed-length records; DBM files and the like work along similar lines.
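      With fixed-length records, record $n always starts at byte $n * $RECLEN, so you can seek straight to it and overwrite it in place without disturbing its neighbours. A sketch with made-up 16-byte records:

```perl
use strict;
use warnings;
use Fcntl qw(:seek);
use File::Temp qw(tempfile);

my $RECLEN = 16;   # every record is exactly 16 bytes, space-padded

# Build a scratch "database" of three records.
my ($fh, $path) = tempfile();
printf $fh '%-*s', $RECLEN, $_ for qw(alpha beta gamma);
close $fh;

open my $db, '+<', $path or die "open: $!";
# Overwrite record 1 (the second record) in place.
seek $db, 1 * $RECLEN, SEEK_SET or die "seek: $!";
printf $db '%-*s', $RECLEN, 'BETA2';
# Jump straight to record 2 and read it back.
seek $db, 2 * $RECLEN, SEEK_SET or die "seek: $!";
read $db, my $rec, $RECLEN;
close $db;
# $rec holds 'gamma' padded to 16 bytes
```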

      Makeshifts last the longest.

      One possible use is for very large datasets that remain the same overall size but whose content changes regularly.

      The archetypal example, though it is doubtful it would be written in Perl, is an OS swapfile. The contents of the swap file are randomly accessed and change all the time, but it is useful to reuse the same patch of disk space each time. By leaving the swapfile (and whatever it contains) in place when shutting down, and reusing it with '+>' on startup, the filesystem doesn't have to reallocate space from the free-space chain anew each time. That allows the semi-permanent swapfile to be fully defragmented, resulting in a single, contiguous allocation that is reused time after time, with a consequent optimisation of performance and reduction in overall fragmentation of the disk space.

      Some Perl applications might benefit from this in the same way. If you regularly download or import large files (say, log files) that are different each time but roughly the same length each time, for analysis or uploading to a database, then retaining the old files and overwriting them with '+>' each time, rather than deleting them when finished and then having to find and chain sufficient space on the disk each time, could benefit you in the same ways.

      This is especially true if you ensure that the space is fully defragmented, or use a tool to pre-allocate the required contiguous space before downloading.

      It also ensures that when the space is needed, it is available, and prevents long downloads from getting 99% of the way done and then aborting for lack of disk space.
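      A hedged sketch of that reuse pattern (refill is a hypothetical helper name; whether the filesystem actually hands back the same blocks after '+>' truncates the file depends on the filesystem, as discussed above):

```perl
use strict;
use warnings;

# Hypothetical helper: overwrite a retained file in place rather than
# deleting it and recreating it from scratch.
sub refill {
    my ($path, $data) = @_;
    # '+>' clobbers (truncates) the contents but keeps the same
    # directory entry, which is what allows the space to be reused.
    open my $fh, '+>', $path or die "open $path: $!";
    print $fh $data;
    close $fh;
    return length $data;
}
```

      Because '+>' truncates on open, a shorter second payload fully replaces a longer first one; no stale tail from the previous run survives.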


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.