Please see the code for slurp.pl at
It reads all files in a directory into a hash and writes the hash out with Storable.pm.
[307] $ Out of memory during "large" request for 536875008 bytes, total sbrk() is 1278849720 bytes at blib/lib/Storable.pm (autosplit into blib/lib/auto/Storable/_freeze.al) line 261, at /#snipped#/bin/slurp.pl line 17
The path has been sanitized to protect myself and my employer. The machine in question is a Sun Ultra 2 with 2GB of RAM (it is a multiuser process server running Oracle and Netscape Server as well, so I don't get all 2GB), and I am attempting to stuff approximately 600MB of plain text data, comprising 4,500 files, into a hash. Storable barfs. How can I use less RAM? Or perhaps implement a sequential write, so that I'm eating, say, 256MB of RAM at a time rather than the whole enchilada?
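One way to sketch that sequential write: instead of freezing the whole hash at once (which is what makes Storable allocate a half-gigabyte buffer), serialize one { filename => contents } pair per Storable frame with store_fd, so only a single file's text is in memory at any moment. This is only a sketch, not your slurp.pl; the directory and output names here are made up, and I'm assuming each individual file fits comfortably in RAM.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(store_fd fd_retrieve);

# Hypothetical names -- substitute your real directory and output file.
my $dir = shift || 'textdir';
my $out = shift || 'slurp.sto';

open my $out_fh, '>', $out or die "Cannot write $out: $!";
opendir my $dh, $dir       or die "Cannot open $dir: $!";

for my $file ( sort grep { -f "$dir/$_" } readdir $dh ) {
    open my $in, '<', "$dir/$file" or die "Cannot read $dir/$file: $!";
    my $text = do { local $/; <$in> };    # slurp one file only
    close $in;

    # One Storable frame per file; memory is freed before the next file.
    store_fd { $file => $text }, $out_fh;
}
closedir $dh;
close $out_fh;

# Reading back, frame by frame -- process each chunk as you go rather
# than merging everything into one hash, or you recreate the problem:
open my $in_fh, '<', $out or die "Cannot read $out: $!";
until ( eof $in_fh ) {
    my $chunk = fd_retrieve($in_fh);      # one { filename => contents } pair
    # ... do something with %$chunk here ...
}
close $in_fh;
```

If you genuinely need random access to the data afterwards, a tied on-disk hash (DB_File, if Berkeley DB is on that Sun box) may be a better fit than Storable, since each key lands on disk as it is assigned.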
thanks
brother dep.
--
Laziness, Impatience, Hubris, and Generosity.