In reply to "Wondering what could be the issue with mechanize find_link".

The page you are trying to scrape uses poor HTML markup: many <a> tags are never closed. WWW::Mechanize gets confused and lumps the whole table into a single link, which you can see by dumping the links:
    print $_->text, "\n" for $mech->find_all_links;
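You can reproduce the effect in isolation. The following self-contained sketch (the markup and the temporary file are made up for illustration; the file:// trick assumes a Unix-like system) feeds WWW::Mechanize a tiny table whose <a> tags are never closed:

    #!/usr/bin/perl
    use warnings;
    use strict;

    use File::Temp qw(tempfile);
    use WWW::Mechanize;

    # Two cells, two links, no closing </a> -- like the broken page.
    my ($fh, $filename) = tempfile(SUFFIX => '.html');
    print {$fh} '<table><tr>'
              . '<td><a href="/job/1">First job'
              . '<td><a href="/job/2">Second job'
              . '</tr></table>';
    close $fh;

    my $mech = WWW::Mechanize->new();
    $mech->get("file://$filename");

    # With no </a> to stop at, the first link's text swallows the
    # rest of the document, so everything comes back as one link.
    print $_->text, "\n" for $mech->find_all_links;

Depending on your WWW::Mechanize version, you should see the link texts run together into a single link, which is exactly what happens on the careers page.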
However, you can count the pages yourself and build each URL from pieces, rather than extracting it from the page:
    #!/usr/bin/perl
    use warnings;
    use strict;

    use WWW::Mechanize;

    my $search_url = 'http://careers.republic.co.uk/pb3/corporate/Republic/search.php';

    my $mech = WWW::Mechanize->new();
    $mech->agent_alias('Mac Safari');   # Some sites block the default agent string.
    $mech->get($search_url);

    my $page_number = 1;
    PAGE: while (1) {
        # Build the URL for the next page ourselves instead of
        # trusting the broken links on the page.
        $mech->get("$search_url?page=$page_number");
        print " *** $page_number *** \n";
        print $_->text, "\n" for $mech->find_all_links;

        # No "Next" anywhere in the content means we reached the last page.
        last PAGE if $mech->content !~ /Next/;
        $page_number++;
    }
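A small side note on the URL building: if you do not like interpolating the query string by hand, the URI module (it comes with LWP, so WWW::Mechanize already pulls it in) can compose it for you. A sketch of the equivalent loop body, assuming page is the only parameter the site needs:

    use URI;

    my $uri = URI->new($search_url);
    $uri->query_form(page => $page_number);   # ...search.php?page=N
    $mech->get($uri);                         # get() accepts a URI object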