As others have pointed out, and as I tried to bring to your attention in your previous thread, you are simply generating too much data to hope to load it all into memory in a 32-bit process.

In a trivial experiment I conducted before responding to your first thread, I generated a 100MB file consisting of 2 million lines of 'phrases' generated randomly from a dictionary. I then counted the 1- to 4-word n-grams and measured the memory used to hold them in a hash. Even using a simple compression algorithm, it still required 2GB of RAM. I repeated the exercise with a 150MB, 3-million-line file and it took 3GB.

    C:\test>head -n 2m phrases.txt > 884345.dat
    C:\test>884345-buk 884345.dat
    words 178691  ngrams 13962318
    perl.exe    4564 Console    1    2,102,076 K

    C:\test>head -n 3m phrases.txt > 884345.dat
    C:\test>884345-buk 884345.dat
    words 178691  ngrams 20850624
    perl.exe    5724 Console    1    3,185,344 K
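
For reference, the counting itself is the simple part. A minimal sketch of that kind of 1- to 4-word n-gram tally (not the script above; the compression step is omitted, so it will use even more memory than the figures shown):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %ngrams;
    while( my $line = <> ) {
        chomp $line;
        my @words = split ' ', $line;
        for my $n ( 1 .. 4 ) {
            ## Slide an $n-word window across the line and count each window.
            for my $i ( 0 .. @words - $n ) {
                ++$ngrams{ join ' ', @words[ $i .. $i + $n - 1 ] };
            }
        }
    }
    printf "ngrams:%d\n", scalar keys %ngrams;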

If this is in any way representative of your data, your 1GB file will consist of ~20 million lines and require 10GB of RAM to hash.

If you are using a 64-bit Perl on a machine with, say, 16GB of memory, then building an in-memory hash is a viable option.

Otherwise, you will need to use something like BerkeleyDB or a full RDBMS to hold your derived data.
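
A minimal sketch of the BerkeleyDB route, using the core DB_File module to tie the hash to a disk-based btree (the file name is illustrative):

    use strict;
    use warnings;
    use DB_File;
    use Fcntl qw( O_RDWR O_CREAT );

    ## Tie the hash to an on-disk btree: each store/fetch goes to the
    ## file, so the dataset is no longer limited by process memory.
    tie my %ngrams, 'DB_File', 'ngrams.db', O_RDWR | O_CREAT, 0666, $DB_BTREE
        or die "Cannot tie ngrams.db: $!";

    ++$ngrams{ 'the quick brown fox' };    ## counts persist across runs

    untie %ngrams;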

But the missing information from both your threads is how you are going to use this data. If this is one file that will be hashed once, or once in a blue moon, with the hash being re-used many times by long-running processes, then building the hash and storing it on disk in Storable format may be the way to go.
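
The core Storable module covers that case; a minimal sketch, assuming the hash has already been built (the contents and file name here are placeholders):

    use strict;
    use warnings;
    use Storable qw( store retrieve );

    my %ngrams = ( 'the quick brown fox' => 1 );   ## stand-in for the real counts

    ## One-off build: freeze the finished hash to disk.
    store \%ngrams, 'ngrams.sto';

    ## Each long-running consumer thaws it back in a single call,
    ## which is far cheaper than re-deriving it from the source file.
    my $href  = retrieve 'ngrams.sto';
    my $count = $href->{ 'the quick brown fox' } // 0;
    print "$count\n";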

On the other hand, if the hashed data is going to be used by lots of short-lived processes (e.g. web pages), then the load time for a 10GB hash would be prohibitive.

If you need to repeat the hashing process on many different large documents and will only use the hash to generate a few statistics for each, then a multi-pass batch-processing chain probably makes more sense.
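
A hedged sketch of one such chain: a first pass that streams the n-grams out one per line, holding nothing in memory, with the counting delegated to the system sort, which spills to disk as needed (the script name is illustrative):

    #!/usr/bin/perl
    ## Pass 1: emit every 1- to 4-word n-gram, one per line.
    use strict;
    use warnings;

    while( my $line = <> ) {
        chomp $line;
        my @words = split ' ', $line;
        for my $n ( 1 .. 4 ) {
            print join( ' ', @words[ $_ .. $_ + $n - 1 ] ), "\n"
                for 0 .. @words - $n;
        }
    }

    ## Pass 2 is then a disk-backed sort/count, e.g.:
    ##   perl emit_ngrams.pl phrases.txt | sort | uniq -c | sort -rn > counts.txt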

Finally, if the process must be repeated many times, and you have a pool of servers at your disposal or are prepared to purchase time on (say) Amazon's EC2, then tilly's map/reduce suggestion makes a lot of sense.

As is often the case with such questions, picking the 'best' solution is very much dependent upon having good information about how the resultant data will be used.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
