in reply to Navigating through pages using WWW::Mechanize
Generally, web-pages of interest will contain one or more hyperlinks, i.e. text strings of the form:
<a href="somewhere_of_interest">some visible tag</a>
... which will appear somewhere in the HTML text that you retrieve in response to each request that you make. Your program’s task, then, is to retrieve a page, scan its content for hyperlinks “of interest to you” whose targets (“somewhere_of_interest ...”) you haven’t visited yet, add those targets to your program’s to-do list, and continue until that list is exhausted; a sketch of that loop follows below.
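In WWW::Mechanize terms, that loop might look something like this minimal sketch. The start URL and the “of interest” pattern here are placeholder assumptions you would replace with your own:

use strict;
use warnings;
use WWW::Mechanize;

my $start   = 'http://example.com/';   # hypothetical starting page
my $pattern = qr/interest/;            # hypothetical "of interest" test

my $mech = WWW::Mechanize->new( autocheck => 0 );
my %seen;                   # targets we have already fetched
my @todo = ($start);        # the program's to-do list

while ( my $url = shift @todo ) {
    next if $seen{$url}++;             # skip anything already visited
    $mech->get($url);
    next unless $mech->success;

    # ... process $mech->content here ...

    # Scan the page for hyperlinks of interest not yet seen.
    for my $link ( $mech->links ) {
        my $target = $link->url_abs->as_string;
        push @todo, $target
            if $target =~ $pattern and not $seen{$target};
    }
}

The %seen hash is what keeps the loop from revisiting pages (or running forever on circular links), and @todo is exactly the to-do list described above.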
On the other hand, sometimes a web-page will bury its navigation logic in JavaScript: an onClick handler, say, constructs the URL and sends the browser to it. WWW::Mechanize does not execute JavaScript, so in that case the simplest approach might be to pick apart what the URLs look like and loop over what they could be, until you start getting 404s.
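If, say, you had worked out that the handler builds URLs from a sequential page number, a brute-force sketch might look like the following. The URL pattern and the upper bound are purely hypothetical:

use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new( autocheck => 0 );

# Hypothetical scheme reverse-engineered from the onClick handler:
# pages are assumed to be numbered sequentially.
for my $n ( 1 .. 999 ) {
    my $url = "http://example.com/page?id=$n";
    $mech->get($url);
    last if $mech->status == 404;      # ran off the end of the series
    # ... process $mech->content here ...
}

Note the autocheck => 0: by default Mechanize dies on a failed fetch, and here the 404 is not an error but the loop's stopping condition.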
Actual production code is left as an exercise for the reader, or for another Monk with more time on his hands than I.