in reply to Hashing Memory Usage

Without seeing the code in question (or at least something stripped down that produces similar bloat under the different perls) I don't know if you're going to get a good response.

Having said that, one alternative when you start running out of RAM while processing a hash is to start tossing the data into something on disk, using BerkeleyDB or the like. You'll lose some speed, but you shouldn't hit the same memory wall.
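
Something along these lines is what I have in mind (an untested sketch; the filename is just an example, and note that for a nested hash-of-hashes like yours you'd need MLDBM layered on top, or flattened keys):

    #!/usr/bin/perl -w
    use strict;
    use BerkeleyDB;

    # Tie the hash to an on-disk file instead of keeping it all in RAM.
    my %h;
    tie %h, 'BerkeleyDB::Hash',
        -Filename => 'spill.db',    # example filename
        -Flags    => DB_CREATE
        or die "Cannot open spill.db: $BerkeleyDB::Error\n";

    $h{'AA1'} = 1;    # reads and writes now go through the file
    untie %h;

Each access costs a trip through the DB layer instead of a RAM lookup, which is the speed hit I mentioned, but the working set no longer has to fit in memory.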

Re^2: Hashing Memory Usage
by awkmonk (Monk) on Jul 12, 2006 at 15:59 UTC
    Ah, good point. This massively cut-down version still produces the same bloat.

    Rewriting the code is an option, but not one I'd like to go into unless there is no other hope.

    #!/usr/bin/perl -w
    use strict;

    my %a = ();
    my $res = `ps v $$`;
    print "$res\n";

    for my $line ( 1 .. 19000 ){
        for ( "AA" .. "DZ" ){
            $a{$line}{"$_$line"} = $line;
        }
    }

    $res = `ps v $$`;
    print "$res\n";
    exit 0;

    'I think the problem lies in the fact that your data doesn't fit my program'.

      Maybe with your cut-down script, Devel::Size could provide some insight (at least give you, on each of the two architectures, the difference between the structure size and the structure+data size).
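
      Roughly like this (a sketch assuming Devel::Size is installed; run it on each box and compare the numbers):

      use strict;
      use warnings;
      use Devel::Size qw(size total_size);

      my %a;
      for my $line ( 1 .. 19000 ){
          for ( "AA" .. "DZ" ){
              $a{$line}{"$_$line"} = $line;
          }
      }

      # size() counts only the top-level structure of %a;
      # total_size() follows the inner hashrefs and counts the data too.
      printf "structure:      %d bytes\n", size(\%a);
      printf "structure+data: %d bytes\n", total_size(\%a);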

      -derby