in reply to Re: How do I backtrack while reading a file line-by-line?
in thread How do I backtrack while reading a file line-by-line?

That section on memory usage is very misleading. Tie::File keeps the index of every encountered line (i.e. every line up to the highest one read or written) in memory. In other words, if you do $tied[-1] or push @tied, ..., the index of every line in the file is loaded into memory (if it hasn't been already).
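To illustrate, here is a minimal sketch (the filename big.log is just an example): any operation that needs the end of the file forces Tie::File to scan the whole file to find every line boundary, and that per-line offset index is held in memory independently of the memory option, which bounds only the read cache and deferred-write buffer.

```perl
use strict;
use warnings;
use Tie::File;

tie my @lines, 'Tie::File', 'big.log'
    or die "Cannot tie big.log: $!";

# Both of these need to locate the end of the file, so Tie::File
# scans it line by line and keeps an offset entry for EVERY line:
my $last = $lines[-1];
push @lines, 'new entry';

untie @lines;
```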

Tie::File is still a very useful module.

Replies are listed 'Best First'.
Re^3: How do I backtrack while reading a file line-by-line?
by grep (Monsignor) on Oct 13, 2006 at 21:26 UTC
    from the POD:
    memory - This is an upper limit on the amount of memory that Tie::File will consume at any time while managing the file. This is used for two things: managing the read cache and managing the deferred write buffer

    I didn't find that misleading. It says to me that only chunks of the file data are loaded into memory. In fact, I assumed that it loaded a full index of the lines at instantiation.

    If the OP knows roughly how much data an average (or the largest) backtrack is, the read cache could be optimized for the memory/speed trade-off. Plus you get a layer of abstraction to hide any nastiness.
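    As a sketch of that tuning (the filename, the 4 MB figure, and the 10-line backtrack are illustrative, not from the OP), the memory option at tie time sizes the read cache, so recently read lines stay cached and backtracking over them avoids rereading the file:

```perl
use strict;
use warnings;
use Tie::File;

# Size the cache above the largest expected backtrack span.
tie my @lines, 'Tie::File', 'big.log',
    memory => 4_000_000    # cap on read cache + deferred-write buffer, in bytes
    or die "Cannot tie big.log: $!";

for my $i ( 0 .. $#lines ) {
    # Backtracking: lines still in the read cache are served from memory.
    my $prev = $i >= 10 ? $lines[ $i - 10 ] : undef;
}

untie @lines;
```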

    grep
    One dead unjugged rabbit fish later

      I didn't find that misleading.

      "[The memory parameter] is an upper limit on the amount of memory that Tie::File will consume at any time while managing the file" is a false statement. Tie::File's memory usage is unbounded. The docs do note an exception, but the wording is very misleading:

      The memory value is not an absolute or exact limit on the memory used. Tie::File objects contains some structures besides the read cache and the deferred write buffer, whose sizes are not charged against memory.

      Does that give the impression that Tie::File's memory usage is unbounded? If not, then the docs are misleading.