1. I read through the info on HTTP::Recorder. It seemed promising (it's cool in its own right... thanks for the link), but unfortunately it doesn't support JavaScript interactions (as mentioned there and on CPAN).
2. The problems I mentioned with mechanizing button clicks on this page using WWW::Mechanize look to be the same for WWW::Mechanize::Firefox, which inherits from it. The CPAN docs show that the button-click methods expect the button to be inside a form. But if I understand your suggestion, it's to use the Firefox plugin (LiveHttpHeaders) to capture what the button clicks / AJAX/JS calls actually send, and then make the equivalent HTTP requests directly from my scrape-bot code. (?) A rough sketch of that idea is below.
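For what it's worth, here is a minimal sketch of that replay approach, assuming the pop-up's JavaScript just fires a plain HTTP POST that LiveHttpHeaders can capture. The endpoint URL and form fields below are placeholders; you'd substitute whatever the capture shows for the real click.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new( autocheck => 1 );

    # Visit the listing page first so any session cookies get set.
    # (Placeholder URL.)
    $mech->get('http://example.com/listing');

    # Replay the request the button's JavaScript would have sent.
    # Endpoint and fields here are hypothetical -- copy them from
    # the LiveHttpHeaders capture of the real click.
    $mech->post(
        'http://example.com/ajax/popup_data',
        {
            itemId => '12345',
            action => 'showDetails',
        },
    );

    # The raw response is whatever the pop-up would have displayed
    # (often an HTML fragment or JSON you can parse directly).
    print $mech->content;

If the site also checks headers such as X-Requested-With or Referer, those can be copied from the capture too, e.g. with $mech->add_header( 'X-Requested-With' => 'XMLHttpRequest' ).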
I'll definitely give it a try, but I'm disappointed that it sounds like there isn't any way to do this strictly with Perl modules, without needing third-party software to "cheat" around the JS/AJAX. It seems like something should be out there to do this in pure Perl, as I'm certainly not the first to want to scrape such a page.
Thanks again, I'll still try your suggested work-around.