I have had some experience with large files, and at the time
tilly provided the golden tip: BerkeleyDB. Available on both win32 and *nix. You can read all about my quest over here. Whatever you can fit in memory, Berkeley's BTree beats Perl's qsort by far for large numbers of items (it wins on both memory and CPU ;-).
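A minimal sketch of the BTree trick, using the DB_File module that ships with most Perls (it needs the Berkeley DB library installed; the file name is arbitrary). Keys inserted into a `$DB_BTREE`-tied hash come back in sorted order when you iterate, and Berkeley DB keeps the bulk of the data on disk rather than in RAM:

```perl
use strict;
use warnings;
use Fcntl;                            # O_RDWR, O_CREAT for the tie
use DB_File;                          # ties a hash to a Berkeley DB file
use File::Temp qw(tempdir);

my $dir     = tempdir(CLEANUP => 1);
my $db_path = "$dir/sort.db";         # arbitrary scratch file

tie my %bt, 'DB_File', $db_path, O_RDWR | O_CREAT, 0666, $DB_BTREE
    or die "tie $db_path: $!";

# Insert in any order; the BTree keeps them sorted on disk.
$bt{$_} = 1 for qw(pear apple kiwi);

for my $key (keys %bt) {              # BTREE iteration = sorted key order
    print "$key\n";                   # apple, kiwi, pear
}
untie %bt;
```

For a real sort you would store each record (or its key) this way and then stream the tied hash back out in one pass.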
Having said that, my experience on linux is that perl dies cleanly with an 'out of memory' the moment I exceed my RAM + swap. But that may be different with ActiveState.
Another issue is how your data is organized. Maybe you have an array of 1 MB chunks; in that case your memory overhead is small. However, if you have a 2000 MB array of 2-byte items, you need at least 80 GB of memory. Array overhead can be large: in my case, a 10 MB array of 2-byte items took over 400 MB of memory. Methinks you get the picture by now.
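One common way to dodge that per-element overhead (not from the post above, just a standard workaround) is to pack the items into a single string: N 2-byte items then cost 2*N bytes plus a single scalar's overhead, instead of dozens of bytes per array element:

```perl
use strict;
use warnings;

# Store 1000 unsigned 16-bit values in one packed string instead of a
# 1000-element array: the string costs 2 bytes per item plus one
# scalar's overhead, while a Perl array pays dozens of bytes per element.
my @items  = map { $_ % 65536 } 1 .. 1000;
my $packed = pack 'v*', @items;          # 'v' = little-endian uint16

print length($packed), "\n";             # 2000 bytes of payload

# Random access without unpacking the whole string:
my $i    = 500;
my $item = unpack 'v', substr($packed, $i * 2, 2);
print "$item\n";                         # 501
```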
Hope this helps a bit, Jeroen
"We are not alone"(FZ)
You definitely need to break the task up into smaller pieces. There are two ways I can see of doing this, both of which reduce the memory usage.
There's a sort method whose name I cannot recall (it's sufficiently uncommon): you break the records into several small files, sort each one separately, then combine the files 'slowly', sorting as you join, until everything is sorted. You can then write it all back out as one large file, but the key is that you never handle all 500 MB at once during a sort.
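A minimal sketch of that scheme, assuming a file of newline-terminated records sorted lexically; the chunk size (lines held in memory at once) is an arbitrary tunable, not something from the post:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

sub external_sort {
    my ($in_path, $out_path, $chunk_size) = @_;
    $chunk_size ||= 100_000;

    # Phase 1: read fixed-size chunks, sort each in memory, and spill
    # every sorted chunk ("run") to its own temp file.
    open my $in, '<', $in_path or die "open $in_path: $!";
    my @runs;
    while (1) {
        my @chunk;
        while (@chunk < $chunk_size and defined(my $line = <$in>)) {
            push @chunk, $line;
        }
        last unless @chunk;
        my ($tmp, $tmp_path) = tempfile(UNLINK => 1);
        print {$tmp} sort @chunk;
        close $tmp;
        push @runs, $tmp_path;
    }
    close $in;

    # Phase 2: merge the sorted runs, holding only the current head
    # line of each run in memory, so the full data set is never loaded.
    my @fhs;
    for my $path (@runs) {
        open my $fh, '<', $path or die "open $path: $!";
        push @fhs, $fh;
    }
    my @heads = map { scalar <$_> } @fhs;
    open my $out, '>', $out_path or die "open $out_path: $!";
    while (1) {
        my ($min_i, $min);
        for my $i (0 .. $#heads) {
            next unless defined $heads[$i];
            if (!defined $min or $heads[$i] lt $min) {
                ($min_i, $min) = ($i, $heads[$i]);
            }
        }
        last unless defined $min;            # all runs exhausted
        print {$out} $min;
        $heads[$min_i] = scalar readline($fhs[$min_i]);
    }
    close $out;
}
```

With a 500 MB input and a chunk size of, say, 100,000 lines, memory use stays around one chunk plus one line per run; a real implementation would also merge the runs in limited-width passes if there are very many of them.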
Another possible option is to read through the file once and extract, for each item you want to sort, the necessary keys to sort on and your position in the file (in bytes). Put these all into a hash, then sort the hash as appropriate. Then reopen the large file and, using the positions, copy what's necessary into a second file, which should come out sorted appropriately. The only drawback here is that you need another 500 MB of free space to do this, since you HAVE to duplicate the file; otherwise you'll screw up the position info.
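A sketch of that index-then-copy approach for newline-delimited records, assuming each record's sort key is unique (a hash holds one offset per key); `$extract_key` is a hypothetical callback standing in for whatever field you sort on:

```perl
use strict;
use warnings;

sub sort_by_index {
    my ($in_path, $out_path, $extract_key) = @_;

    # Pass 1: remember each record's key and its byte offset.
    open my $in, '<', $in_path or die "open $in_path: $!";
    my %offset_for;                   # key => byte offset of the record
    while (1) {
        my $pos  = tell $in;          # offset BEFORE reading the line
        my $line = <$in>;
        last unless defined $line;
        $offset_for{ $extract_key->($line) } = $pos;
    }

    # Pass 2: seek back to each record in sorted-key order and copy it
    # into a NEW file, leaving the original (and its offsets) untouched.
    open my $out, '>', $out_path or die "open $out_path: $!";
    for my $key (sort keys %offset_for) {
        seek $in, $offset_for{$key}, 0 or die "seek: $!";
        print {$out} scalar <$in>;
    }
    close $out;
    close $in;
}

# Hypothetical usage: sort "key rest-of-record" lines by their first field.
# sort_by_index('big.dat', 'big.sorted', sub { (split ' ', $_[0])[0] });
```

Only the keys and offsets live in memory, so this works as long as the index itself fits in RAM.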
Now, both approaches assume that these are flat files (that is, each record occupies a contiguous run of bytes). If you have something, I can't imagine what, where the data for one item is spread throughout the file, then neither of these will work.
Dr. Michael K. Neylon - mneylon-pm@masemware.com
"You've left the lens cap of your mind on again, Pinky" - The Brain
I think you are referring to merge sort...
"The pajamas do not like to eat large carnivore toasters." In German: "Die Pyjamas mögen nicht große Tiertoaster essen." In Spanish: "Los pijamas no tienen gusto de comer las tostadoras grandes del carnívoro."