in reply to reading (caching?) large files

Be aware that your OS already implements a cache. The cache that you are making may interfere with the OS's.

I use fairly large files myself (among others tab-delimited), and I first try to read them into memory at once. Sometimes that is not possible, and then I try to minimize the data in a row-read-write approach (sketched below). As long as you use the proper functions, the OS will take care of the caching.
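
Just to illustrate the row-read-write idea, here is a minimal sketch over a tab-delimited file; the filenames and the choice of columns are made up:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read one row at a time, keep only what is needed, write it back out.
    open my $in,  '<', 'big_data.tab'     or die "open input: $!";
    open my $out, '>', 'big_data.reduced' or die "open output: $!";

    while (my $line = <$in>) {           # sequential reads: the OS page cache does the rest
        chomp $line;
        my @fields = split /\t/, $line;  # tab-delimited columns
        print {$out} join("\t", @fields[0 .. 2]), "\n";   # keep only the columns you need
    }

    close $out or die "close output: $!";
    close $in;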

In cases where I need a large amount of data available for lookup/search/sort, I use BerkeleyDB. Very nice performance.
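
In Perl, one common way to get at Berkeley DB is the DB_File module and its tied-hash interface. A minimal sketch, assuming a simple lookup table (file name, key, and value are made up):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Fcntl;     # O_RDWR, O_CREAT
    use DB_File;   # Berkeley DB through a tied hash

    my %lookup;
    tie %lookup, 'DB_File', 'lookup.db', O_RDWR | O_CREAT, 0644, $DB_HASH
        or die "Cannot tie lookup.db: $!";

    $lookup{'id_42'} = 'some value';        # stored on disk, not in memory
    print "$lookup{'id_42'}\n" if exists $lookup{'id_42'};

    untie %lookup;

If you need the keys kept in sorted order (for ranged lookups or sorted traversal), $DB_BTREE instead of $DB_HASH does that.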

Hope this helps,

Jeroen
"We are not alone"(FZ)

Re: Re: reading (caching?) large files
by perchance (Monk) on Jun 05, 2001 at 17:11 UTC
    Sorry if I'm not following, but:

    1. The filesystem caches into its swap space whenever it reads something too large into memory. Do you mean it also reads ahead when Perl opens a handle, or once it has read a certain amount, so that it saves time?

    2. No time to use anything like BerkeleyDB now, but I'll remember it for the future; it sounds useful.

    3. What exactly do you mean by row-read-write? Regular line-by-line reading? How is that helpful?

    10x again,
    me

    --- Find the River

    Edit by tye

      I was too brief, apparently.

      1. The filesystem caches pages, not whole files. So while perl is reading line by line, it often reads from the same page, and each time that page comes straight from the cache. That works quite efficiently for sequential reads.
      2. The DB is quite easy to use: it has a tied interface, so you can treat it just like a hash.
      3. Indeed, regular line by line. That way you can shrink the data as you go and reduce memory usage: for example, remove double spaces, drop unneeded fields, or write numbers as bytes (see the sketch below), without having to store everything in memory.
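
      For point 3, here is a minimal sketch of the "numbers as bytes" trick using pack; the values are made up:

          #!/usr/bin/perl
          use strict;
          use warnings;

          my @numbers = (17, 4242, 65000);

          # 'n*' packs each value into 2 big-endian bytes (good for values below 65536)
          my $packed = pack 'n*', @numbers;
          printf "packed into %d bytes\n", length $packed;   # 6 bytes instead of 11+ characters

          my @back = unpack 'n*', $packed;                   # and back again
          print "@back\n";                                   # 17 4242 65000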

      Jeroen
      "We are not alone"(FZ)