in reply to How to remove duplicates from a large set of keys

With a million keys, you should go for the database.

If you only need hash lookups (as opposed to all the query stuff you can do with a relational database), you could give Berkeley DB a try; it even ships with Perl, as the DB_File module. Instead of making you manipulate data through a query language, it pretends to be a Perl hash.
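
Here is a minimal sketch of that tied-hash style, used to filter duplicates. This is not the sample code from the DB_File documentation; the file name seen.db and the assumption that keys arrive one per line on standard input are mine:

    use strict;
    use warnings;
    use Fcntl;      # supplies O_RDWR and O_CREAT
    use DB_File;    # exports $DB_HASH

    # Tie a Perl hash to an on-disk Berkeley DB file: ordinary hash
    # reads and writes become database operations behind the scenes.
    my %seen;
    tie %seen, 'DB_File', 'seen.db', O_RDWR|O_CREAT, 0666, $DB_HASH
        or die "Cannot open seen.db: $!";

    while ( my $key = <STDIN> ) {
        chomp $key;
        next if exists $seen{$key};   # already recorded: a duplicate
        $seen{$key} = 1;              # first sighting: remember it
        print "$key\n";
    }

    untie %seen;

Because the hash lives on disk, the seen-set survives between runs and is not limited by available RAM.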

Re^2: How to remove duplicates from a large set of keys
by BrowserUk (Patriarch) on Feb 10, 2005 at 14:28 UTC
    With a million keys, you should go for the database.

    Why? The OP was concerned with speed.

    I see this as another (merlyn-style) "bad meme". There are plenty of very good reasons for using a database, but *speed* is not one of them!

    Using DB_File takes over 5 minutes to do what this code does in under 10 seconds. And that is once you've worked out how; the sample code from DB_File does not even compile as printed.

    It may be possible to improve on that 5 minutes if you hunt the internet to locate, read, and understand the Berkeley DB optimisation and configuration advice, but you'll never get near direct file access for performance in this application.
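
    For reference, the direct approach being compared against is just an ordinary in-memory hash. This is a minimal sketch, not BrowserUk's actual benchmark code from the thread; it assumes keys arrive one per line on standard input or in files named on the command line:

        use strict;
        use warnings;

        # Keep every key seen so far in an ordinary in-memory hash;
        # a million short keys fit comfortably in RAM, and each lookup
        # is far cheaper than a tied DB_File fetch that goes to disk.
        my %seen;
        while ( my $key = <> ) {
            chomp $key;
            print $key, "\n" unless $seen{$key}++;   # emit each key once
        }

    Most of the speed difference comes from avoiding per-key trips through the tie interface and the on-disk store.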


    Examine what is said, not who speaks.
    Silence betokens consent.
    Love the truth but pardon error.