So, I'm trying to write a web crawler, and WWW::Mechanize is throwing errors on links that are perfectly valid. I even tried fetching one of the links outside my recursive crawler and it worked fine. I'm guessing the website stops serving pages to me after a certain request limit is reached, the way the Google Search API caps queries. Do some websites limit how much of their site you can crawl?
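
For reference, here's a minimal sketch of the kind of check that might reveal what's going on (the URL and the delay are placeholder values, not my actual crawler): with `autocheck` disabled, `get()` won't die on an error response, so the HTTP status code can be inspected directly. A 403 or 429 coming back would point to throttling or blocking rather than bad links.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use WWW::Mechanize;

    # With autocheck => 0, get() does not die on an HTTP error,
    # so we can look at the status ourselves.
    my $mech = WWW::Mechanize->new( autocheck => 0 );

    # Placeholder URL for illustration.
    my $url      = 'http://example.com/some/page';
    my $response = $mech->get($url);

    if ( $mech->success ) {
        print "Fetched $url OK\n";
    }
    else {
        # A 403 (Forbidden) or 429 (Too Many Requests) here suggests
        # the site is rate-limiting the crawler, not that the link is bad.
        warn "GET $url failed: ", $response->status_line, "\n";
    }

    # Pausing between requests inside the crawl loop may also
    # help avoid tripping a rate limit.
    sleep 2;

Logging the status line for each failed fetch should show whether the failures correlate with how many pages have already been requested.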