Instead of slurping the entire %visited hash into memory with Storable, try a lightweight database; DBI with DBD::SQLite, for example. The point is that you probably need to give up on keeping the list of every site you've visited in memory, so you need a solution that gives you quick random access to data kept in more permanent storage, like a hard drive. Flat files don't scale well for lookups, so a quick-and-dirty database would be ideal.
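Here's a minimal sketch of the setup, assuming DBI and DBD::SQLite are installed; the file name "visited.db" and table name "visited" are just examples, not anything from your code:

<code>
use strict;
use warnings;
use DBI;

# Open (or create) a SQLite file on disk to hold visited URLs.
my $dbh = DBI->connect( "dbi:SQLite:dbname=visited.db", "", "",
    { RaiseError => 1, AutoCommit => 1 } );

# One row per URL we've already crawled; making the URL the primary
# key lets SQLite handle duplicate detection for us.
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS visited (
        url TEXT PRIMARY KEY
    )
});
</code>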

Each time you visit a site, plop its URL into the database. And each time you're considering following a link, check whether its URL is already in the database (see the sketch below).
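Something along these lines would do it, building on the handle above; mark_visited and already_visited are hypothetical helper names, not part of any module:

<code>
# INSERT OR IGNORE silently skips URLs that are already stored.
my $insert = $dbh->prepare("INSERT OR IGNORE INTO visited (url) VALUES (?)");
my $lookup = $dbh->prepare("SELECT 1 FROM visited WHERE url = ?");

# Record a URL once it's been fetched.
sub mark_visited {
    my ($url) = @_;
    $insert->execute($url);
}

# Test a candidate link before following it.
sub already_visited {
    my ($url) = @_;
    $lookup->execute($url);
    my ($seen) = $lookup->fetchrow_array;
    $lookup->finish;
    return defined $seen;
}

# Typical crawler loop (sketch):
#   next if already_visited($link);
#   fetch_page($link);          # however you fetch pages now
#   mark_visited($link);
</code>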


Dave


In reply to Re: Using text files to remove duplicates in a web crawler by davido
in thread Using text files to remove duplicates in a web crawler by mkurtis
