dlarochelle has asked for the wisdom of the Perl Monks concerning the following question:
I'm putting together a test suite for a web crawler.
In the initial version of the test suite, we hosted files on a publicly accessible web site and hard-coded its URL into the test cases; the tests had the crawler spider that URL, then download and process the files it found.
We recently had to shut down the web server that our tests pointed the crawler at, which broke the tests. I'd like to find a way to test the crawler without having to host files publicly. Can anyone suggest an alternative approach?
My initial thought is that the test suite could somehow start up a local fake web server for the tests to download from. However, I couldn't find anyone describing a way to do that.
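Roughly, this untested sketch is the kind of thing I'm imagining, assuming HTTP::Daemon from libwww-perl; run_crawler_tests() is just a placeholder for our real test code:

```perl
use strict;
use warnings;
use HTTP::Daemon;
use HTTP::Response;

# Start a throwaway HTTP server on localhost with an ephemeral port.
my $daemon = HTTP::Daemon->new( LocalAddr => '127.0.0.1' )
    or die "Can't start test server: $!";
my $base_url = $daemon->url;    # e.g. http://127.0.0.1:52345/

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ( $pid == 0 ) {
    # Child process: serve canned pages until the parent kills us.
    while ( my $conn = $daemon->accept ) {
        while ( my $req = $conn->get_request ) {
            if ( $req->uri->path eq '/' ) {
                my $res = HTTP::Response->new(200);
                $res->header( 'Content-Type' => 'text/html' );
                $res->content('<a href="/page2.html">page 2</a>');
                $conn->send_response($res);
            }
            else {
                $conn->send_error(404);
            }
        }
        $conn->close;
    }
    exit 0;
}

# Parent process: point the crawler at $base_url and run the tests,
# then shut the fake server down.
# run_crawler_tests($base_url);    # placeholder for our real test code
kill 'TERM', $pid;
waitpid $pid, 0;
```

The idea is that each test would use $base_url in place of the hard-coded public URL, so nothing needs to be hosted outside the test machine.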
Any thoughts?
Thanks in advance.
Replies are listed 'Best First'.

Re: Testing a web crawler
by BrowserUk (Patriarch) on Mar 22, 2010 at 21:43 UTC

Re: Testing a web crawler
by ikegami (Patriarch) on Mar 22, 2010 at 20:29 UTC
    by Anonymous Monk on Mar 25, 2010 at 22:24 UTC

Re: Testing a web crawler
by pemungkah (Priest) on Mar 23, 2010 at 09:33 UTC
    by dlarochelle (Sexton) on Mar 25, 2010 at 22:32 UTC