Here's the solution using plain WWW::Mechanize. It fails the XHTML test because (I think) WWW::Mechanize uses HTML::TokeParser for link extraction and somehow misparses the "Six" link:
#!/usr/bin/env perl
use warnings;
use strict;
use WWW::Mechanize;

my $file = shift or die;
print "##### WWW::Mechanize on $file #####\n";

# slurp the test document
my $html = do {
    open my $fh, '<', $file or die "$file: $!";
    local $/;
    <$fh>
};

my $mech = WWW::Mechanize->new();
$mech->update_html($html);

my @links = $mech->links();
for my $link (@links) {
    print $link->url, "\t", $link->text, "\n";
}
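To see whether the misparse really comes from the tokenizer rather than from Mech itself, here is a minimal sketch that does roughly the same link extraction directly with HTML::TokeParser. This is only a diagnostic aid, not part of the solution above, and the output format is my own choice:

#!/usr/bin/env perl
use warnings;
use strict;
use HTML::TokeParser;

my $file = shift or die;
my $html = do {
    open my $fh, '<', $file or die "$file: $!";
    local $/;
    <$fh>
};

my $p = HTML::TokeParser->new(\$html);
# get_tag('a') returns [$tag, \%attr, \@attrseq, $text] for each start tag
while (my $tag = $p->get_tag('a')) {
    my $href = $tag->[1]{href};
    my $text = $p->get_trimmed_text('/a');
    print defined $href ? $href : '', "\t", $text, "\n";
}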
Since HTML::TokeParser and HTML::Parser even live in the same distribution, I'll look into a pull request that switches the link extraction to the parser that works; a rough sketch of that approach follows.
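As a hedged illustration of what that change could look like, here is a sketch that collects the links with the event-based HTML::Parser interface instead of HTML::TokeParser. The callback structure is my own; it is not the actual patch to WWW::Mechanize:

#!/usr/bin/env perl
use warnings;
use strict;
use HTML::Parser;

my $file = shift or die;
my $html = do {
    open my $fh, '<', $file or die "$file: $!";
    local $/;
    <$fh>
};

my (@links, $href, $text);
my $p = HTML::Parser->new(
    api_version => 3,
    # remember the href and start collecting text when an <a> opens
    start_h => [ sub {
        my ($tagname, $attr) = @_;
        if ($tagname eq 'a') { $href = $attr->{href}; $text = ''; }
    }, 'tagname, attr' ],
    # accumulate decoded text inside the current <a>
    text_h  => [ sub { $text .= $_[0] if defined $href }, 'dtext' ],
    # emit the link when the </a> closes
    end_h   => [ sub {
        if ($_[0] eq 'a' and defined $href) {
            push @links, [ $href, $text ];
            undef $href;
        }
    }, 'tagname' ],
);
$p->parse($html);
$p->eof;

print "$_->[0]\t$_->[1]\n" for @links;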
Update: The pull request