Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hi there! I wrote a sub which reads, sorts and writes some huge files (about 500MB). (Sorry, I can't read line by line.)
When this sub returns (simply "return 0;") I get a "software exception" because perl.exe couldn't read some memory. Writing the data to file still works; all the data is available and not corrupted in any way (as far as I can see :-)
Even if I only read and return I get this crash. Is there any known problem with Perl and huge memory usage? (If the files are only about 400 MB I have no problems.)
Perl 5.004 or 5.6.0, WinNT SP6, enough memory available (about 1 GB)

Re: huge memory usage = software exception
by jeroenes (Priest) on Mar 16, 2001 at 23:21 UTC
    I have had some experience with large files, and at the time tilly provided the golden tip: BerkeleyDB. It is available on both Win32 and *nix. You can read all about my quest over here. Even for data you can fit in memory, Berkeley's BTree beats Perl's qsort by far for large numbers of items (beats it in terms of both memory and CPU ;-).
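    For what it's worth, a minimal sketch of that idea using the DB_File module (Perl's standard wrapper around Berkeley DB) might look like the following. The file names, the tab-separated record layout, and the assumption that sort keys are unique are mine, purely for illustration; duplicate keys would need the BTree R_DUP flag.

        use strict;
        use Fcntl;
        use DB_File;

        # Tie a hash to an on-disk BTree; Berkeley DB keeps the keys sorted
        # on disk, so we never have to hold the whole data set in RAM.
        my %sorted;
        tie %sorted, 'DB_File', 'scratch.btree', O_RDWR | O_CREAT, 0644, $DB_BTREE
            or die "Cannot tie scratch.btree: $!";

        open my $in,  '<', 'huge_input.dat'  or die "huge_input.dat: $!";
        open my $out, '>', 'huge_sorted.dat' or die "huge_sorted.dat: $!";

        while (my $line = <$in>) {
            chomp $line;
            my ($key) = split /\t/, $line;   # assumed: key is the first tab-separated field
            $sorted{$key} = $line;
        }

        # Iterating a BTree-tied hash returns the keys in sorted order.
        while (my ($key, $line) = each %sorted) {
            print {$out} $line, "\n";
        }

        untie %sorted;
        close $in;
        close $out;
        unlink 'scratch.btree';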

    Having said that, my experience on Linux is that Perl dies nicely with an 'out of memory' error the moment I exceed my RAM + swap. But that may be different with ActiveState.

    Another issue is how your data is organized. Maybe you have an array whose elements are 1 MB chunks each; in that case your memory overhead is small. However, if you have 2000 MB stored as an array of 2-byte items, you need at least 80 GB of memory, because every Perl scalar carries a few dozen bytes of bookkeeping on top of its actual data. Array overhead may be large. In my case, a 10 MB array of 2-byte items took me over 400 MB of memory. Methinks you get the picture by now.
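    (If you want to see that overhead for yourself, something like the CPAN module Devel::Size can report it. This snippet is just an illustration, not something from the original post.)

        use strict;
        use Devel::Size qw(total_size);

        # One million 2-byte strings: about 2 MB of payload ...
        my @items = ('xx') x 1_000_000;
        printf "payload: %d bytes\n", 2 * scalar @items;

        # ... but the array itself typically weighs in at tens of MB,
        # because each element is a full Perl scalar.
        printf "actual : %d bytes\n", total_size(\@items);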

    Hope this helps a bit,

    Jeroen
    "We are not alone"(FZ)

Re: huge memory usage = software exception
by Masem (Monsignor) on Mar 16, 2001 at 22:45 UTC
    You definitely need to break up the task into smaller pieces. There are two ways I can see of doing this that will reduce the memory usage.

    There's a sort method whose name I cannot recall (it's sufficiently uncommon), where you break the records into several small files, sort each one separately, then combine the files 'slowly', sorting as you join them, until everything is sorted. You can then write it all back out as one large file, but the key is that you never handle all 500 MB at once during a sort.
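    A rough sketch of that split-sort-merge idea in Perl, assuming one record per line and made-up file names:

        use strict;

        my $chunk_lines = 500_000;          # tune to whatever fits comfortably in RAM
        my @chunk_files;

        # Pass 1: split the input into individually sorted chunk files.
        open my $in, '<', 'huge_input.dat' or die $!;
        while (1) {
            my @buf;
            while (defined(my $line = <$in>)) {
                push @buf, $line;
                last if @buf >= $chunk_lines;
            }
            last unless @buf;
            my $name = 'chunk' . scalar(@chunk_files) . '.tmp';
            open my $chunk, '>', $name or die $!;
            print {$chunk} sort @buf;
            close $chunk;
            push @chunk_files, $name;
        }
        close $in;

        # Pass 2: merge the sorted chunks, holding only one line per chunk in memory.
        my @fhs   = map { open my $fh, '<', $_ or die $!; $fh } @chunk_files;
        my @heads = map { scalar readline($_) } @fhs;
        open my $out, '>', 'huge_sorted.dat' or die $!;
        while (grep { defined } @heads) {
            # pick the chunk whose current line sorts first
            my ($min) = sort { $heads[$a] cmp $heads[$b] }
                        grep { defined $heads[$_] } 0 .. $#heads;
            print {$out} $heads[$min];
            $heads[$min] = readline($fhs[$min]);
        }
        close $out;
        unlink @chunk_files;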

    Another possible option is to read through the file once and extract, for each item you want to sort, the necessary keys to sort on and your position in the file (in bytes). Put these all into a hash, then sort the hash as appropriate. Then reopen the large file and, using the positions, copy what's necessary into a second file, which will then be sorted appropriately. The only drawback here is that you need another 500 MB of free disk space, since you HAVE to write to a duplicate file; otherwise you'll screw up the position info.
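    A minimal sketch of that key-plus-offset approach, again assuming one record per line with the sort key as the first tab-separated field (those details, and the file names, are just for illustration):

        use strict;

        open my $in, '<', 'huge_input.dat' or die $!;

        # Pass 1: build an index of [ sort key, byte offset of the record ].
        my @index;
        while (1) {
            my $pos  = tell $in;            # where this record starts
            my $line = <$in>;
            last unless defined $line;
            my ($key) = split /\t/, $line;
            push @index, [ $key, $pos ];
        }

        # Pass 2: sort the small index, then seek back and copy each record
        # into the output file in sorted order.
        open my $out, '>', 'huge_sorted.dat' or die $!;
        for my $rec (sort { $a->[0] cmp $b->[0] } @index) {
            seek $in, $rec->[1], 0 or die "seek failed: $!";   # 0 = SEEK_SET
            print {$out} scalar <$in>;
        }
        close $in;
        close $out;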

    Now, both assume that these are flat files (that is, each data piece takes up a contiguous set of bytes). If you have some format (though I can't imagine what) where the data for one item is spread throughout the file, neither of these will work.


    Dr. Michael K. Neylon - mneylon-pm@masemware.com || "You've left the lens cap of your mind on again, Pinky" - The Brain

      I think you are referring to merge sort...

      "The pajamas do not like to eat large carnivore toasters."
      In German: "Die Pyjamas mögen nicht große Tiertoaster essen."
      In Spanish: "Los pijamas no tienen gusto de comer las tostadoras grandes del carnívoro."