in reply to Best way to crawl PM's XML nodes.

Corion requested in a chatterbox conversation that the spider wait at least as long before initiating a new request as the last request took. (tye, too, I see now.) Reprinting your calculation from the project wiki, at 1 second per page...

620,000 pages will take 620000/(60*60*24) ≈ 7 days.

Is that a decent estimate? I dunno. I'm not in the habit of writing spiders that make 1 request per second against a single server for a week solid, and I don't know how the adaptive waiting algo will perform.
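For concreteness, here's a minimal sketch of the adaptive-wait idea in Python (the function names and structure are my own illustration, not anything the spider actually uses): before each request, sleep at least as long as the previous request took.

```python
import time

def polite_fetch(urls, fetch):
    """Fetch each URL, sleeping at least as long as the previous
    request took before issuing the next one.

    `fetch` is a caller-supplied function taking a URL and returning
    the document; this generator yields (url, document) pairs.
    """
    last_duration = 0.0
    for url in urls:
        time.sleep(last_duration)          # wait >= duration of last request
        start = time.monotonic()
        doc = fetch(url)
        last_duration = time.monotonic() - start
        yield url, doc
```

If the server slows down under load, the spider automatically backs off proportionally, which is the point of the request made above.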

After thinking things over, though, I'm less concerned than I used to be about the total time. We can afford to be patient. We'll only have to do this once, and we'll be able to serve up useful results long before we have a complete set of documents. In web search, precision trumps recall. Spidering does seem like kind of an inefficient way to acquire what's effectively a database dump, though.

The thing I've become much more concerned about is our lack of access to node rep -- as you pointed out to me, it's not available unless you're logged in and have voted on a node.

Being able to feed node rep into the indexer will make a huge difference in terms of providing a high-quality search experience. We're talking about the kind of thing that allowed Google to differentiate itself from Alta Vista in 1998, when PageRank was introduced. Google's great innovation was to use link analysis to calculate an absolute measure of a page's worth. We already have such an absolute measure; we need to use it.

Without factoring node rep into the scoring algo, we'll be relying on TF-IDF alone -- the top results will be those rated "most relevant" according to that algorithm, but not necessarily "high quality nodes" as judged by the user community. Combining TF-IDF with node rep, though, will make the "best" documents among those judged "relevant" float to the top. The perceived quality of the top results will be greatly increased... People will find the good stuff faster.
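One common way to fold a per-document quality signal into a relevance score is a multiplicative boost. A sketch in Python (the damping function here is my own illustration; the real indexer would tune this):

```python
import math

def boosted_score(tfidf_score, node_rep):
    """Scale a TF-IDF relevance score by a damped function of node rep,
    so highly-upvoted nodes float above equally 'relevant' ones.

    log1p damping keeps a 500-rep node from drowning out everything;
    max(node_rep, 0) ignores negative rep rather than inverting the score.
    """
    return tfidf_score * (1.0 + math.log1p(max(node_rep, 0)))
```

With rep 0 the score is unchanged, so documents without rep data still rank normally; rep only ever nudges a relevant document upward.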

We can build a prototype without node rep and Super Search will still be much improved. It will still be a much better search than you find on the vast majority of websites out there... But it won't live up to its highest potential. Node rep is crucial metadata to have.

--
Marvin Humphrey
Rectangular Research ― http://www.rectangular.com

Re^2: Best way to crawl PM's XML nodes.
by ambrus (Abbot) on Jun 11, 2007 at 07:32 UTC
      One of the main things wrong with prlmnks: http://prlmnks.org/html/620384.html (missing at the time of this comment). In fact, it looks as though nothing has been updated since some time in 2006.

      -Paul

      Wow, I never knew such a site existed. The youngest nodes are those created in Oct 2006. Is it still going? How frequently is the mirroring done?

      Open source software? Share and enjoy. Make a profit from it if you can. Yet, share and enjoy!

      Ambrus,

      Aside from the other items that have been brought up, I'm not sure how your reply addresses the post you're replying to. http://prlmnks.org doesn't provide a way to get at node rep.

      Perhaps you mean, "What's wrong with the search at http://prlmnks.org?" It's certainly better than what we have here but 1) it's not here and 2) I think we can improve on it further.

      --
      Marvin Humphrey
      Rectangular Research ― http://www.rectangular.com