The best approach is to read records from the file until you hit some size threshold (whatever fits comfortably in memory, including room for the sort itself), sort those records, then write them out to a sequentially numbered file. Once the entire 20 GB has been written out this way, probably to somewhere in the range of 50-100 files, go through the files in pairs and combine each pair with a line-by-line merge routine. Then go through the resulting files in pairs, and so on, until only one file remains. If this has to run unattended, just schedule a cleanup routine to run some reasonable amount of time later and wipe the intermediate files if more than one is left.
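A minimal Python sketch of the two phases described above, a classic external merge sort. The function name, the line-count threshold standing in for a byte-size threshold, and the use of temp files are all illustrative choices, not tuned for an actual 20 GB workload (where you'd pick the chunk size from available RAM and probably merge more than two runs at a time):

```python
import heapq
import itertools
import os
import shutil
import tempfile

def sort_large_file(input_path, output_path, max_lines=100_000):
    """Sort a text file too big for memory: sorted runs, then pairwise merges."""
    runs = []
    # Phase 1: read up to max_lines at a time, sort in memory,
    # write each sorted run to its own temp file.
    with open(input_path) as f:
        while True:
            chunk = list(itertools.islice(f, max_lines))
            if not chunk:
                break
            chunk.sort()
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as out:
                out.writelines(chunk)
            runs.append(path)
    if not runs:  # empty input -> empty output
        open(output_path, "w").close()
        return
    # Phase 2: merge runs two at a time until only one file remains.
    while len(runs) > 1:
        merged = []
        for i in range(0, len(runs), 2):
            if i + 1 == len(runs):
                merged.append(runs[i])  # odd one out, carry it forward
                continue
            fd, path = tempfile.mkstemp(text=True)
            with open(runs[i]) as a, open(runs[i + 1]) as b, \
                 os.fdopen(fd, "w") as out:
                # heapq.merge does the line-by-line merge of two sorted streams.
                out.writelines(heapq.merge(a, b))
            os.remove(runs[i])
            os.remove(runs[i + 1])
            merged.append(path)
        runs = merged
    shutil.move(runs[0], output_path)
```

Note this assumes every line ends with a newline and that plain lexicographic line order is what you want; records with a separate sort key would need a `key=` function in both the `sort` and the merge.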