in reply to Re: parallel downloading
in thread spidering, multi-threading and netiquette
The simplest way to do this is to log all requests to the screen and be quick with the Ctrl-C when things go bad.

To the OP -- if there is any chance of the script going out of control, there should be sleep calls embedded in the code to reduce load. During debugging, these intervals should be fairly long (0.5 - 1 second between requests?). Once you know the script is well-behaved, you may be able to shorten them somewhat. As the bot writer, you have the utmost responsibility to limit your scans to the bare minimum. Not only does bandwidth cost money, but you could be slowing down access for other users. Also, if you are running a simple spider, don't do something evil like running it continuously -- run it from a crontab (with a long interval) or by hand.
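For the crontab suggestion, an entry might look like this (the script path and the once-a-day schedule are just assumptions for illustration; pick an interval the site admin would be comfortable with):

```
# Run the spider once a day at 03:17 -- hypothetical path and schedule
17 3 * * * /home/user/bin/myspider.pl
```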
For sub-second sleeping between requests, check out Time::HiRes.
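A minimal sketch of what that might look like: a throttle sub that guarantees a minimum interval between requests, using Time::HiRes's drop-in sleep. The sub name, the 0.5 s interval, and the fetch placeholder are my own choices for illustration, not anything from the original post:

```perl
use strict;
use warnings;
use Time::HiRes qw(time sleep);  # sub-second time() and sleep()

# Call throttle($min_interval) before each request to guarantee at
# least $min_interval seconds between requests.
my $last_request = 0;
sub throttle {
    my ($min_interval) = @_;
    my $elapsed = time() - $last_request;
    sleep($min_interval - $elapsed) if $elapsed < $min_interval;
    $last_request = time();
}

# Example: two fetches at least half a second apart
for my $url (qw(http://example.com/a http://example.com/b)) {
    throttle(0.5);
    # ... fetch $url here, e.g. with LWP::UserAgent ...
}
```

During debugging you would pass a longer interval (1 second or more), then shorten it once the script has proven itself well-behaved.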
As to the multithread question, this should be entirely up to the site admin. If he says no, don't spider it at all. If this were my site, I'd consider a multithreaded spider quite abusive, since it would be doing things normal web browsers would not do.
Re: Re: Re: parallel downloading
by hossman (Prior) on Feb 21, 2004 at 20:54 UTC