Sort the lists independently and then do the dupe elimination as part of the merge. And yes, with data sets that large, going to disk is required. But that's a good thing: you don't want to hold all of those millions of strings in memory at once, do you? Using on-disk methods you can sort your sets with a disk-based file sort tool that will be more efficient than Perl's generalized in-memory routine. Then the merge is O(N), i.e. a single pass over the files.
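Here's a minimal sketch of that merge step, assuming both inputs are already sorted one string per line and in the same byte order that Perl's string comparison uses (sort with LC_ALL=C if you use an external tool); the file names sorted_a.txt and sorted_b.txt are just placeholders:

    use strict;
    use warnings;

    open my $fh_a, '<', 'sorted_a.txt' or die "sorted_a.txt: $!";
    open my $fh_b, '<', 'sorted_b.txt' or die "sorted_b.txt: $!";

    my $a_line = <$fh_a>;
    my $b_line = <$fh_b>;
    my $last;

    while (defined $a_line or defined $b_line) {
        my $next;
        # take the lesser of the two current lines (either side may be exhausted)
        if (defined $a_line and (!defined $b_line or $a_line le $b_line)) {
            $next   = $a_line;
            $a_line = <$fh_a>;
        }
        else {
            $next   = $b_line;
            $b_line = <$fh_b>;
        }
        # dupe elimination: in sorted input equal lines arrive adjacent,
        # so comparing against the last line printed is enough
        print $next unless defined $last and $next eq $last;
        $last = $next;
    }

Each input line is read exactly once and only adjacent candidates are compared, so this stays a single O(N) pass no matter how many duplicates there are.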
Actually, IIRC, most on-disk sort tools have dupe elimination built in. GNU sort, for instance, drops duplicates with -u, and with -m it merges already-sorted files, so the whole merge step above collapses to a single sort -m -u invocation.
In reply to Re^5: better union of sets algorithm? by demerphq, in thread better union of sets algorithm? by perrin