in reply to Re^2: better union of sets algorithm?
in thread better union of sets algorithm?

Oh, yeah, I did mean intersection. Union is even easier: use the same procedure, but skip duplicate lines once they become adjacent in the merged order. This means you never have more than two lines in memory at once.
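A minimal sketch of that procedure: sort each list, merge with a cursor into each, and emit a value only when it differs from the previously emitted one. The name merge_union is illustrative, not from the thread.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Merge two individually sorted arrays of strings into their union.
# Duplicates (across or within the lists) become adjacent in the
# merged order, so each value is emitted only once.
sub merge_union {
    my ($x, $y) = @_;
    my @out;
    my ($i, $j) = (0, 0);
    my $last;
    while ($i < @$x || $j < @$y) {
        my $next;
        if ($j >= @$y || ($i < @$x && $x->[$i] le $y->[$j])) {
            $next = $x->[$i++];
        } else {
            $next = $y->[$j++];
        }
        # Skip a line equal to the one just emitted.
        push @out, $next unless defined $last && $next eq $last;
        $last = $next;
    }
    return @out;
}

my @union = merge_union(
    [ sort qw(pear apple cherry) ],
    [ sort qw(banana apple pear) ],
);
print "@union\n";   # apple banana cherry pear
```

Only the two cursor positions are examined at any time, which is what keeps the memory footprint down to the current pair of lines.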

---
demerphq

Re^4: better union of sets algorithm?
by perrin (Chancellor) on Mar 11, 2005 at 15:36 UTC
    The items being sorted on are strings. It seems to me like merging them and sorting them together before scanning to remove dupes would make this slower than the hash approach, especially since I may have to go to disk if the combined size is a couple of million entries or so. I guess I should really bench it.

      Sort the lists independently and then do the dupe elimination as part of the merge. And yes, with data sets that large, going to disk is required. But that's a good thing: you don't want to hold all of those millions of strings in memory at once, do you? Using on-disk methods you can sort your sets with a disk-based file sort tool that will be more efficient than Perl's generalized in-memory routine. Then the merge is O(N) (i.e. a single pass over the file).
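The on-disk variant can be sketched like this: merge two already-sorted line streams in one O(N) pass, holding only the current line from each stream, and drop duplicates as they meet. The function name merge_union_fh and the in-memory filehandles are illustrative; in practice the handles would be opened on the externally sorted set files.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Merge two sorted line streams into their union, one pass,
# one buffered line per stream.
sub merge_union_fh {
    my ($fa, $fb, $out) = @_;
    my $la = <$fa>;
    my $lb = <$fb>;
    my $last;
    while (defined $la || defined $lb) {
        my $next;
        if (!defined $lb || (defined $la && $la le $lb)) {
            ($next, $la) = ($la, scalar <$fa>);
        } else {
            ($next, $lb) = ($lb, scalar <$fb>);
        }
        # Dupe elimination as part of the merge.
        print {$out} $next unless defined $last && $next eq $last;
        $last = $next;
    }
}

# In-memory handles keep the sketch self-contained; substitute
# handles on the sorted set files for the real thing.
my $set_a = "apple\ncherry\npear\n";
my $set_b = "apple\nbanana\npear\n";
open my $fa, '<', \$set_a or die $!;
open my $fb, '<', \$set_b or die $!;
merge_union_fh($fa, $fb, \*STDOUT);
```

Because each input line is read exactly once and nothing is accumulated, the pass stays O(N) in time and constant in memory regardless of how many millions of entries the files hold.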

      Actually, IIRC, you will find that most on-disk sort tools have dupe elimination built in.

      ---
      demerphq