in reply to Getting a website's content

Does it actually terminate after 15 minutes? If not, when does it terminate?

Incidentally, it helps to wrap code in <code> tags.

Replies are listed 'Best First'.
Re^2: Getting a website's content
by lampros21_7 (Scribe) on Jul 23, 2005 at 13:35 UTC
    It doesn't terminate after 15 minutes; I just get fed up and close the command-prompt window. I'm not too sure about WWW::Mechanize, as I want to write a web crawler: every link it finds would have to be stored, and then, before visiting a stored link, it would have to check whether that link has already been visited; if it hasn't, visit it and fetch its HTML content too. Thanks.
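      The visited-link bookkeeping you describe maps naturally onto a queue plus a hash of seen URLs. Here is a minimal sketch using WWW::Mechanize; the start URL is a placeholder, and in a real crawler you'd also want to restrict which hosts you follow and honour robots.txt:

      <code>
      use strict;
      use warnings;
      use WWW::Mechanize;

      my $mech = WWW::Mechanize->new( autocheck => 0 );
      my %visited;                                  # URLs we've already fetched
      my @queue = ('http://example.com/');          # hypothetical start page

      while ( my $url = shift @queue ) {
          next if $visited{$url}++;                 # skip already-visited links
          $mech->get($url);
          next unless $mech->success;
          my $html = $mech->content;                # the page's HTML content
          print "Fetched $url\n";

          # store every link found on the page for later visiting
          for my $link ( $mech->links ) {
              my $abs = $link->url_abs->as_string;
              push @queue, $abs unless $visited{$abs};
          }
      }
      </code>

      Because %visited is checked before each fetch, every page is retrieved at most once, so the loop terminates once the reachable set is exhausted.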
      It could be recursively fetching pages further and further down the hierarchy. I don't know WWW::Robot too well, but you probably want to write something for the follow-url-test hook to limit which URLs it follows, and see whether it terminates then.
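        A sketch of what that hook might look like, assuming the addHook interface and hook names from the WWW::Robot documentation (the NAME/EMAIL values and the same-host restriction are just illustrative choices):

        <code>
        use strict;
        use warnings;
        use WWW::Robot;

        my %seen;
        my $robot = WWW::Robot->new(
            NAME    => 'MyCrawler',
            VERSION => '0.1',
            EMAIL   => 'me@example.com',    # placeholder contact address
        );

        # decide whether a discovered URL should be followed
        $robot->addHook( 'follow-url-test', sub {
            my ( $robot, $hook, $url ) = @_;
            return 0 if $seen{ $url->as_string }++;          # never revisit
            return $url->host eq 'example.com';              # stay on one host
        } );

        $robot->run('http://example.com/');
        </code>

        Without such a test the robot will happily keep following every new link it finds, which would explain a run that never seems to finish.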