Corion requested in a chatterbox conversation that the spider wait at least as long before initiating a new request as the last request took. (tye, too, I see now.) Reprinting your calculation from the project wiki, at 1 second per page...

620,000 pages will take 620,000 / (60 * 60 * 24) ≈ 7.2 days.

Is that a decent estimate? I dunno. I'm not in the habit of writing spiders that make 1 request per second against a single server for a week solid, and I don't know how the adaptive waiting algo will perform.
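
To make the request concrete, here's a rough sketch of the adaptive delay as I understand it: wait at least as long as the previous request took before firing the next one. This is only an illustration using LWP::UserAgent and Time::HiRes; the agent string and sample URL are placeholders, not settled choices.

    use strict;
    use warnings;
    use LWP::UserAgent;
    use Time::HiRes qw( time sleep );

    my $ua = LWP::UserAgent->new( agent => 'pm-spider/0.01' );   # placeholder name
    my $last_elapsed = 0;    # seconds the previous request took

    sub polite_get {
        my ($url) = @_;

        # Wait at least as long as the last request took before
        # initiating a new one.
        sleep($last_elapsed) if $last_elapsed > 0;

        my $start    = time();
        my $response = $ua->get($url);
        $last_elapsed = time() - $start;

        return $response;
    }

    # Illustrative only -- the real node list and URL scheme are TBD.
    my $response = polite_get('http://www.perlmonks.org/?node_id=3333;displaytype=xml');

If the server slows down under load, the spider automatically backs off in proportion, which is the point of the request.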

After thinking things over, though, I'm less concerned than I used to be about the total time. We can afford to be patient. We'll only have to do this once, and we'll be able to serve up useful results long before we have a complete set of documents. In web search, precision trumps recall. Spidering does seem like kind of an inefficient way to acquire what's effectively a database dump, though.

The thing I've become much more concerned about is our lack of access to node rep -- as you pointed out to me, it's not available unless you're logged in and have voted on a node.

Being able to feed node rep into the indexer will make a huge difference in terms of providing a high-quality search experience. We're talking about the kind of thing that allowed Google to differentiate itself from AltaVista in 1998, when PageRank was introduced. Google's great innovation was to use link analysis to calculate an absolute measure of a page's worth. We already have such an absolute measure; we need to use it.

Without factoring node rep into the scoring algo, we'll be relying on TF-IDF alone -- the top results will be those rated "most relevant" according to that algorithm, but not necessarily the "high-quality nodes" as judged by the user community. Combining TF-IDF with node rep, though, will make the "best" documents among those judged "relevant" float to the top. The perceived quality of the top results will be greatly increased... People will find the good stuff faster.
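
To illustrate what I mean by "combining": here's one possible way to fold node rep into the final score. The formula is purely hypothetical -- damping rep with a log so it boosts relevance without swamping it -- and would need tuning against real queries.

    use strict;
    use warnings;
    use List::Util qw( max );

    # Hypothetical combination of textual relevance and node rep.
    # The log keeps a handful of monster-rep nodes from drowning out
    # everything else; negative rep is clamped to zero.
    sub combined_score {
        my ( $tf_idf_score, $node_rep ) = @_;
        my $rep_boost = log( 1 + max( $node_rep, 0 ) );
        return $tf_idf_score * ( 1 + $rep_boost );
    }

    # e.g. two nodes judged equally relevant by TF-IDF, one at +50 rep
    # and one at +2: the +50 node floats to the top.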

We can build a prototype without node rep, and it will still be a big improvement over Super Search. It will still be a much better search than you find on the vast majority of websites out there... But it won't live up to its highest potential. Node rep is crucial metadata to have.

--
Marvin Humphrey
Rectangular Research ― http://www.rectangular.com
