in reply to Re: Using text files to remove duplicates in a web crawler
in thread Using text files to remove duplicates in a web crawler

Using a cpan://DBI database will work, but a tied hash with a DB_File (or similar) backing will be an order of magnitude faster, as well as simpler to script. See the sketch below.
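
For example, a minimal sketch of that approach (seen_urls.db is an assumed filename, and get_next_url() / crawl() are hypothetical stand-ins for the crawler's own logic):

    use strict;
    use warnings;
    use Fcntl;
    use DB_File;

    # Tie a hash to an on-disk Berkeley DB file; keys persist across
    # runs, so the crawler can resume without re-scanning a text file.
    tie my %seen, 'DB_File', 'seen_urls.db', O_CREAT|O_RDWR, 0644, $DB_HASH
        or die "Cannot tie seen_urls.db: $!";

    while ( defined( my $url = get_next_url() ) ) {
        next if exists $seen{$url};   # duplicate - already crawled
        $seen{$url} = 1;              # record it before fetching
        crawl($url);
    }

    untie %seen;

The exists check is a single hash lookup against the on-disk store, so there is no need to slurp or grep a growing list of URLs on every iteration.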