in reply to Re: Getting a website's content
in thread Getting a website's content

It doesn't terminate after 15 minutes; I just get fed up and close the command prompt window. I'm not too sure about WWW::Mechanize, because what I want is a web crawler: every link it finds would have to be stored, then it would check whether the first stored link has already been visited, and if it hasn't, visit it and fetch its HTML content too. Thanks.
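
That store-and-check loop is just a queue plus a lookup table of visited URLs, which WWW::Mechanize handles fine. A minimal sketch, assuming a placeholder start URL and ignoring politeness delays, robots.txt, and depth limits:

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech  = WWW::Mechanize->new( autocheck => 0 );
    my @queue = ('http://example.com/');   # placeholder start page
    my %visited;

    while ( my $url = shift @queue ) {
        next if $visited{$url}++;          # skip pages already fetched
        $mech->get($url);
        next unless $mech->success && $mech->is_html;
        my $html = $mech->content;         # the page's HTML, ready to process
        # store every absolute link found on this page for a later visit
        push @queue, map { $_->url_abs->as_string } $mech->links;
    }

Because %visited marks each URL the first time it comes off the queue, every page is fetched at most once, so the loop terminates once all reachable pages have been seen.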

Re^3: Getting a website's content
by marnanel (Beadle) on Jul 24, 2005 at 20:11 UTC
    It could be recursively getting pages further and further into the hierarchy. I don't know WWW::Robot too well, but you probably want to write a follow-url-test hook to limit which URLs it follows.
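
    WWW::Robot supports named hooks, and follow-url-test is the one consulted before each URL is followed, so returning false there is how you stop it descending forever. A rough sketch along those lines; the same-host check and the NAME/VERSION/EMAIL values are placeholders:

        use strict;
        use warnings;
        use WWW::Robot;

        my $robot = WWW::Robot->new(
            NAME    => 'MyCrawler',        # placeholder identity
            VERSION => '0.01',
            EMAIL   => 'me@example.com',
        );

        # Refuse any URL outside the starting host, so the robot
        # cannot wander off into the rest of the web indefinitely.
        $robot->addHook( 'follow-url-test', sub {
            my ( $robot, $hook, $url ) = @_;
            return $url->host eq 'example.com';
        } );

        $robot->run('http://example.com/');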