in reply to Slurping a large (>65 gb) into buffer

Pardon my ignorance... but why the need to slurp in clusters of HTML pages? Is that meant to increase efficiency somehow, or is there some other requirement for this multi-page read?
the hardest line to type correctly is: stty erase ^H

Re^2: Slurping a large (>65 gb) into buffer
by downer (Monk) on Oct 01, 2007 at 16:16 UTC
    Yes, 65 GB. I downloaded this data from a respected source, and now I am trying to incrementally read as much as I can into memory, process it, and then get some more. I think setting the record separator will be useful (see the sketch after the replies below).
      You didn't answer aquarium's question, which was: why do you plan to load more than a page at a time?
      Yes, 65 GB. I downloaded this data from a respected source
      /me .oO( Hugh Hefner )
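
For anyone reading later, here is a minimal sketch of the record-separator approach downer mentions, assuming the 65 GB dump is a single file of concatenated HTML pages delimited by "</html>" (the filename, the delimiter, and the process_page() helper are made up for illustration, not taken from the original post). Setting $/ makes each read on the filehandle return one page rather than one line, so only a single page sits in memory at a time:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Assumption: each page in the dump ends with "</html>".
    my $file = 'pages.dump';                 # hypothetical filename
    open my $fh, '<', $file or die "Cannot open $file: $!";

    {
        local $/ = "</html>";                # one <$fh> read now returns one page
        while ( my $page = <$fh> ) {
            process_page($page);             # handle one page, then let it go
        }
    }
    close $fh;

    sub process_page {
        my ($html) = @_;
        # placeholder: whatever per-page work the real job needs
        print length($html), " bytes\n";
    }

Wrapping the local $/ assignment in a block restores the default line-by-line behaviour for any later reads, and nothing larger than one page is ever held in memory.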