deprecated has asked for the wisdom of the Perl Monks concerning the following question:
The path has been sanitized to protect myself and my employer. The machine in question is a Sun Ultra 2 with 2GB of RAM (this is a multiuser process server running Oracle and Netscape Server as well, so I don't get all 2GB), and I am attempting to stuff approximately 600MB of plain text data, comprising 4,500 files, into a hash. Storable barfs. How can I use less RAM? Or perhaps implement a sequential write so that I'm eating, say, 256MB of RAM at a time rather than the whole enchilada?

[307] $ Out of memory during "large" request for 536875008 bytes, total sbrk() is 1278849720 bytes at blib/lib/Storable.pm (autosplit into blib/lib/auto/Storable/_freeze.al) line 261, at /#snipped#/bin/slurp.pl line 17
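Here is roughly the direction I'm imagining, as a minimal sketch only (the directory, chunk size, and chunk filenames below are made-up placeholders, not my real script): slurp the files a couple hundred at a time, nstore() each small hash to its own file, and empty the hash before moving on, so Storable never has to freeze anything close to 600MB in one go.

#!/usr/bin/perl -w
use strict;
use Storable qw(nstore);

# Placeholders for the sake of the sketch -- not the real paths.
my $dir        = '/path/to/text/files';
my $chunk_size = 200;                  # files per chunk; keeps each freeze small
my $chunk_num  = 0;
my %chunk;

opendir DIR, $dir or die "can't opendir $dir: $!";
my @files = grep { -f "$dir/$_" } readdir DIR;
closedir DIR;

for my $i (0 .. $#files) {
    my $file = $files[$i];
    open FILE, "$dir/$file" or die "can't open $file: $!";
    {
        local $/;                      # slurp the whole file into one scalar
        $chunk{$file} = <FILE>;
    }
    close FILE;

    # Write the current chunk out and empty it every $chunk_size files,
    # so neither the hash nor Storable's freeze buffer grows very large.
    if ( keys(%chunk) >= $chunk_size or $i == $#files ) {
        nstore( \%chunk, sprintf( "slurp.chunk.%03d", $chunk_num++ ) );
        %chunk = ();
    }
}

The reading side would then just loop over the chunk files with retrieve() and merge each small hash back in as needed. Or is a tied hash (DB_File / MLDBM) the saner answer here, since then nothing has to be frozen all at once?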
thanks
brother dep.
--
Laziness, Impatience, Hubris, and Generosity.
Replies are listed 'Best First'.
Re: Improving Memory Efficiency with Storable and Hashes of Scalars (code)
by bikeNomad (Priest) on May 31, 2001 at 23:49 UTC
Re: Improving Memory Efficiency with Storable and Hashes of Scalars (code)
no help at all here
by baku (Scribe) on Jun 01, 2001 at 09:28 UTC