I'm not sure which part of my advice you have problems with. I tried to make the steps clear, so please tell me which step I didn't explain thoroughly enough:
Learn what your browser sends, then send that from Perl.
This is meant to tell you to investigate what data your browser sends to the remote webserver when you click a button. The idea behind this is that the remote webserver cannot tell whether a browser or a Perl script sits at the other end, so as long as your Perl script sends the same data as the browser does, it will never find out.
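Purely for illustration, here is a minimal sketch of that idea using LWP::UserAgent. The URL, form field names, values and extra headers below are placeholders; you would substitute whatever your browser actually sends:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    # All of the values below are placeholders; copy the real ones
    # from the request your browser sends.
    my $ua = LWP::UserAgent->new(
        agent      => 'Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0',
        cookie_jar => {},   # keep session cookies, as a browser would
    );

    my $response = $ua->post(
        'https://example.com/login',        # placeholder URL
        {
            username => 'libvenus',         # placeholder form fields
            password => 'secret',
            submit   => 'Login',
        },
        Referer => 'https://example.com/',  # copy any extra headers the browser sent
    );

    die 'Login request failed: ', $response->status_line
        unless $response->is_success;

    print $response->decoded_content;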
For example, using the Live HTTP Headers extension.
This sentence points you to a tool that can capture that data for you.
Or learn Javascript and how it interacts with the HTML DOM, and what a click on the submit button does.
This sentence is intended to show you the other, more static, approach to scraping: read (and understand) the Javascript, work out what it does, and replicate it directly from Perl.
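As an illustration of that approach, suppose (purely hypothetically) that the page's onclick handler fills a hidden "token" field with a nonce and the username before submitting the form. Once you have understood that, you do the same computation in Perl and submit the result yourself. Everything below (URLs, field names, the regex) is made up for the example:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new( cookie_jar => {} );

    # Fetch the login page and pull the nonce out of it (placeholder pattern).
    my $page = $ua->get('https://example.com/login')->decoded_content;
    my ($nonce) = $page =~ /name="nonce"\s+value="([^"]+)"/
        or die 'Could not find the nonce on the login page';

    my $username = 'libvenus';

    # Replicate what the Javascript onclick handler would have computed,
    # e.g. document.loginform.token.value = nonce + ':' + username;
    my $token = "$nonce:$username";

    my $response = $ua->post(
        'https://example.com/dologin',   # placeholder form action
        {
            username => $username,
            password => 'secret',
            token    => $token,
        },
    );

    print $response->status_line, "\n";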
Or just modify the code to find it out.
This sentence is to show you a variation on the more static approach. By modifying the Javascript code, you may also be able to find out what it does and what purpose it serves.
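To read or modify the Javascript, you first need your own copy of it. Assuming the scripts live in external files, a small sketch like the following downloads the page and saves every referenced script locally, so you can read it or sprinkle alert()/console.log() lines into your copy. The URL is a placeholder, and a real HTML parser would be more robust than the regex:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;
    use URI;

    my $ua   = LWP::UserAgent->new;
    my $url  = 'https://example.com/login';   # placeholder URL
    my $html = $ua->get($url)->decoded_content;

    # Crude extraction of external script URLs; good enough for a quick look.
    my $n = 0;
    while ( $html =~ /<script[^>]+src="([^"]+)"/g ) {
        my $src  = URI->new_abs( $1, $url );  # resolve relative URLs
        my $file = sprintf 'script_%02d.js', ++$n;
        print "Saving $src as $file\n";
        open my $fh, '>', $file or die "Cannot write $file: $!";
        print {$fh} $ua->get($src)->decoded_content;
        close $fh;
    }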