in reply to Re^2: tying a hash from a big dictionary
in thread tying a hash from a big dictionary

How many lines does your file have? How many of those do you succeed in loading before you run out of memory?
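For the first question, a line count can be had with a one-liner along these lines (the file name here is just a placeholder):

    perl -ne 'END { print "$.\n" }' dictionary.tsv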


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Replies are listed 'Best First'.
Re^4: tying a hash from a big dictionary
by Anonymous Monk on Oct 31, 2011 at 14:06 UTC
    I have around 200 million lines. I don't know after how many lines it runs out of memory, since I haven't measured that yet.

      The addition of the following 3 lines should tell you with sufficient accuracy after a single run:

      sub read_dict {
          local $| = 1;                               ##! unbuffer output so the progress count appears immediately
          my $file = shift;
          my %dict;
          open( my $fh, "<:encoding(utf8)", $file );
          my $c = 0;                                  ##! line counter
          while( <$fh> ) {
              printf "\r%d\t", $c unless ++$c % 1000; ##! report progress every 1000 lines
              chomp;                        ## no need to chomp twice
              my( $p1, $p2 ) = split /\t/;
              push @{ $dict{ $p1 } }, $p2;
          }
          close $fh;
          return \%dict;   ## main space saving change; return a ref to the hash
      }
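      The space-saving change is returning a reference rather than a copy of the hash; a minimal sketch of how a caller might then use it (the file name and key below are made up for illustration):

      my $dict = read_dict( 'dictionary.tsv' );
      for my $p2 ( @{ $dict->{'some_word'} || [] } ) {
          print "$p2\n";
      }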

        Running on a 4 GB machine, it will run out of memory after about 5 million entries!
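        A rough way to check the per-entry cost on your own perl is the CPAN module Devel::Size; this sketch (with made-up sample data, not from the thread) averages the total size over a batch of entries:

        use strict;
        use warnings;
        use Devel::Size qw( total_size );

        my %dict;
        push @{ $dict{ "key$_" } }, "value$_" for 1 .. 100_000;   ## sample hash-of-arrays
        printf "about %d bytes per entry\n", total_size( \%dict ) / 100_000;

        On a 64-bit perl, a hash of one-element arrays of short strings typically costs a few hundred bytes per entry, so exhausting 4 GB long before 200 million lines is plausible.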