This is what I'm trying to use and it's not working:
use strict;
use warnings;
use WWW::Mechanize;
use FileHandle;                 # needed for FileHandle->new

my $fh   = FileHandle->new("text.txt", "w");
my $mech = WWW::Mechanize->new( autocheck => 1 );

$mech->get("xxx");

# Save the captcha image so it can be looked at and typed in.
$mech->mirror( $mech->find_image( url_regex => qr/captcha/ )->url_abs, "tokei.jpg" );

$mech->get("xxx");

print "type:";
chomp( my $cap = <STDIN> );     # strip the newline so it isn't submitted as part of the captcha

$mech->form_id('UserLoginForm');
$mech->field( "data[User][username]", "xxx" );
$mech->field( "data[User][password]", "xxx" );
$mech->field( "captcha", $cap );
$mech->submit();

sleep 3;

$mech->get( "xxx", ':content_file' => "r.htm" );
$mech->dump_text($fh);          # this is the call that dies
$fh->close;
Everything works except the text dump, which throws an error I can't read because the console window closes too fast. How do I keep it open? I'm new to Perl and can't get any further with this program; I've searched but found nothing. Help, please.
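The closest thing I can think of is wrapping the call in an eval and pausing before exit, something like this sketch (the eval/pause approach is my guess, not something I've tested):

# Sketch: trap whatever dump_text() dies with and pause so the console
# window stays open (instead of the bare $mech->dump_text($fh) call above).
my $ok = eval { $mech->dump_text($fh); 1 };
print "dump_text failed: $@\n" unless $ok;
print "Press Enter to quit...";
<STDIN>;    # keep the window open until Enter is pressed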
-Edit- dump_headers works, so why doesn't dump_text? I need to see the error, but I don't know how :/
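One guess, not confirmed: $mech->text, which dump_text prints, seems to load HTML::TreeBuilder on demand, so dump_headers could work while dump_text dies if that module isn't installed. A quick way to check:

# Check whether HTML::TreeBuilder (used by $mech->text / dump_text)
# can be loaded, and show the reason if it can't.
if ( eval { require HTML::TreeBuilder; 1 } ) {
    print "HTML::TreeBuilder loads fine\n";
}
else {
    print "HTML::TreeBuilder failed to load: $@\n";
}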