Improving Memory Efficiency with Storable and Hashes of Scalars (code)

by deprecated (Priest)
on May 31, 2001 at 23:27 UTC

deprecated has asked for the wisdom of the Perl Monks concerning the following question:

Please see the code for slurp.pl at All files in dir to Storable.pm data.
[307] $ Out of memory during "large" request for 536875008 bytes, total sbrk() is 1278849720 bytes at blib/lib/Storable.pm (autosplit into blib/lib/auto/Storable/_freeze.al) line 261, at /#snipped#/bin/slurp.pl line 17
The path has been sanitized to protect myself and my employer. The machine in question is a Sun Ultra 2 with 2 GB of RAM (this is a multiuser process server running Oracle and Netscape Server as well, so I don't get all 2 GB), and I am attempting to stuff approximately 600 MB of plain text data, comprising 4,500 files, into a hash. Storable barfs. How can I use less RAM? Or perhaps implement a sequential write so that I'm eating, say, 256 MB of RAM at a time rather than the whole enchilada?
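
The failing pattern, roughly (a hypothetical sketch; the real slurp.pl is at the node linked above, and the output name 'brick' here is just a placeholder):

#!/usr/bin/perl
# Hypothetical reconstruction: every file is read into one in-memory hash, and
# Storable then has to serialize the whole ~600 MB structure in a single freeze().
use warnings;
use strict;
use File::Slurp;
use Storable qw(freeze);

my %html_files;
opendir my $dh, '.' or die $!;
foreach my $file (readdir $dh) {
    next unless -f $file;
    $html_files{$file} = read_file($file);   # ~600 MB of text ends up in RAM
}
closedir $dh;

my $frozen = freeze(\%html_files);           # a second huge copy, allocated all at once
write_file('brick', { binmode => ':raw' }, $frozen);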

thanks
brother dep.

--
Laziness, Impatience, Hubris, and Generosity.


Replies are listed 'Best First'.
Re: Improving Memory Efficiency with Storable and Hashes of Scalars (code)
by bikeNomad (Priest) on May 31, 2001 at 23:49 UTC
    Why not just a hash tied to a Berkeley DB database instead of messing with Storable and huge files?
    #!/usr/bin/perl
    use warnings;
    use strict;
    use Carp;
    use File::Slurp;
    use BerkeleyDB;

    my %html_files;
    my $totalSize = 0;
    my $fileName  = 'brick';

    # Tie the hash to an on-disk Berkeley DB file: values go to disk as they
    # are assigned, so the whole data set never has to sit in memory at once.
    my $db = tie %html_files, 'BerkeleyDB::Hash',
        -Filename => $fileName,
        -Flags    => DB_CREATE;
    die "can't open db: $!\n" if !$db;

    opendir TD, '.' or croak $!;
    foreach my $file (readdir TD) {
        next if $file eq $fileName;    # don't slurp the DB file itself
        next unless -f $file;          # skip '.', '..' and subdirectories
        my $fileData = read_file($file);
        $html_files{$file} = $fileData;
        $totalSize += length($fileData);
    }
    closedir TD;

    print "Stored $totalSize bytes\n";
    exit 0;    # 'good' exit for the shell
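
    Reading a document back later is just a hash lookup through the same tie, so nothing has to fit in memory afterwards either. A small sketch, reusing the 'brick' filename from above (the key 'somepage.html' is only an example):

    #!/usr/bin/perl
    # Fetch one document back out of the Berkeley DB file created above.
    use warnings;
    use strict;
    use BerkeleyDB;

    my %html_files;
    tie %html_files, 'BerkeleyDB::Hash',
        -Filename => 'brick',
        -Flags    => DB_RDONLY
        or die "can't open db: $!\n";

    my $doc = $html_files{'somepage.html'};   # read from disk, not from a 600 MB in-core hash
    print length($doc), " bytes\n" if defined $doc;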
Re: Improving Memory Efficiency with Storable and Hashes of Scalars (code) no help at all here
by baku (Scribe) on Jun 01, 2001 at 09:28 UTC

    It sounds like a "sequential write" is exactly what you need here...

    Why not loop over each source file and append them to your output?
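
    One way to do that while sticking with Storable: freeze one file's worth of data at a time and append each frozen chunk to the output as a length-prefixed record, so only a single file is ever in memory. A rough sketch (the output name 'brick.seq' and reading the current directory are placeholders):

    #!/usr/bin/perl
    # Sequential write: one Storable freeze() per source file, appended to the
    # output as a length-prefixed record, so memory use stays at one file's worth.
    use warnings;
    use strict;
    use Carp;
    use File::Slurp;
    use Storable qw(freeze);

    open my $out, '>', 'brick.seq' or croak "can't open brick.seq: $!";
    binmode $out;

    opendir my $dh, '.' or croak $!;
    foreach my $file (readdir $dh) {
        next unless -f $file;    # skip '.', '..' and subdirectories
        my $frozen = freeze({ $file => scalar read_file($file) });
        print {$out} pack('N', length $frozen), $frozen;
    }
    closedir $dh;
    close $out or croak "close failed: $!";

    Reading it back is the mirror image: read four bytes, unpack('N') the length, read that many bytes, thaw() the record, and repeat until EOF.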

    If that's not plausible for some reason, something like Berkeley DB could very well be a solution. But given the sizes you're talking about (600 MB of plain text, plus presumably some amount of metadata, e.g. filenames, plus the overhead and index of the DB file itself), you might be running into the maximum file size of the DB format (IIRC, a DB_File database can only be 2 GB) in the very near future.

    I assume there's a driving reason this isn't being dumped into binary records in Oracle?

    Um, good luck regardless!
