in reply to Re^6: Scraping Ajax / JS pop-up
in thread Scraping Ajax / JS pop-up

Thanks again for the suggestion :P, but I have more than a handful of patents (and conference presentations) on advancing the state of the art in the field of computer networks.

Naturally :)

I don't claim to be an expert in all areas, but you should probably take a less arrogant and condescending tone if you truly wish to be helpful in a forum for general Perl questions. "Maybe you should learn about the internet" doesn't help anyone, and it should be obvious that my question was valid and was posed by someone with knowledge beyond the content of the "learn about the internet" links you responded with.

Well, I disagree. If you carefully review your statements and mine, your opinion might change. I never disputed the validity of your question, but you don't appear to have understood any of my answers, which I attribute to a conceptual/vocabulary problem, hence my suggestion.

Keep in mind there are JavaScript plugins for the Mechanize-style modules we are discussing, and the question was asked in earnest, after I had made an effort to do what I'm trying to do. You proposed a work-around to a Mechanize-based approach, which should itself suggest the validity of seeking such an approach. Thanks for your time.

Also, this is a perfect example of the clarity of some of your statements.

I outlined three approaches:

  1. use Firefox + the LiveHTTPHeaders extension to figure out what HTTP is going on
  2. use Firefox (or any browser) and HTTP::Recorder to figure out what HTTP is going on (see the sketch after this list)
  3. use an automatable JS-capable browser, like WWW::Mechanize::Firefox or Selenium/WebKit/IEAutomation, or WWW::Scripter (an experimental WWW::Mechanize subclass with alpha-level support for JavaScript)
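
For the second approach, the documented HTTP::Recorder setup is roughly the following (a minimal sketch; the port and log-file path are arbitrary placeholders):

  use HTTP::Proxy;
  use HTTP::Recorder;

  # run a local recording proxy (port 8080 is an arbitrary choice)
  my $proxy = HTTP::Proxy->new( port => 8080 );

  # HTTP::Recorder logs each request that passes through the proxy
  my $agent = HTTP::Recorder->new;
  $agent->file('/tmp/recorded_requests');    # placeholder log file

  $proxy->agent($agent);
  $proxy->start;

  # point the browser's proxy setting at localhost:8080 and click around;
  # the recorded requests show what HTTP the page's JavaScript generates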

You dismissed the first two approaches as cheating, and proclaimed WWW::Mechanize::Firefox disappointing because it's not pure Perl.

Re^8: Scraping Ajax / JS pop-up
by Monk-E (Initiate) on Feb 16, 2012 at 07:55 UTC
    Not to beat this into the ground, but as I've stated, your 3rd suggested approach is the one I'm interested in. But that's also the one I've been pursuing, if you look at my code again. WWW::Scripter along with its Ajax plugin is what my code is using... so all the goodness available in WWW::Mechanize::Firefox that you're suggesting I use is where I was stuck in the first place.
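
    For reference, the kind of WWW::Scripter setup I mean is roughly this (a minimal sketch based on the module's documented interface; the URL is a placeholder):

        use WWW::Scripter;

        my $w = WWW::Scripter->new;
        $w->use_plugin('Ajax');    # the Ajax plugin also loads the JavaScript plugin
        $w->get('http://example.com/page-with-js-popup');    # placeholder URL

        # scripts on the page, including XMLHttpRequest calls, are run by the
        # JavaScript plugin, so (ideally) the popup's content ends up in the DOM
        print $w->content;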

    Please do not take offense at the term "cheating"; I am using it in a way synonymous with your "use non-perl X to 'figure out' what is going on" phrasing above, since my expectation from the proclaimed JavaScript support is that the module would remove the need for the user to sniff HTTP with tools. The preference for approach 3 is to minimize manually "figuring out the HTTP" behind the JS as much as possible... what's behind the calls can change as the target website changes, whereas all of that would be encapsulated if the module handled it as encountered. Again, thanks for the suggestions... they may indeed be the route I need to take. And HTTP::Recorder is a pretty cool module to have handy in general.

      In my experience, you will have to look at the HTTP requests that go over the wire. The only "hands-off" solution that works well for my case is WWW::Mechanize::Firefox, but that should be no surprise, as I wrote it. Even with WWW::Mechanize::Firefox, though, if you care about efficiency or speed, you will have to look at which HTTP requests are made and which of them can be skipped. Also, when automating a JavaScript-heavy site, you will have to read the JavaScript to find out what functions to call instead of clicking elements on the page, so you get the results in a more usable form.
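
      As a rough illustration of that last point (the URL and the page function name here are hypothetical), calling into the page's JavaScript with WWW::Mechanize::Firefox looks something like:

          use WWW::Mechanize::Firefox;

          # requires a running Firefox with the MozRepl extension
          my $mech = WWW::Mechanize::Firefox->new();
          $mech->get('http://example.com/');    # placeholder URL

          # instead of clicking through the UI, call the page's own JS function;
          # loadPopupData() is a hypothetical name found by reading the page's JS
          my ( $value, $type ) = $mech->eval_in_page('loadPopupData(42)');

          print $mech->content;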

      My reason for automating Firefox is that Firefox is a supported and interactive platform. If a website does not work with Firefox, it's the website's fault, not the fault of my program. And I can watch Firefox as it navigates through the website, which is a plus while developing the automation.

      Of course, the module needs Firefox, and Firefox needs a display. There is PhantomJS, but so far I have found its model of interaction between the controlling JavaScript and the JavaScript within the page lacking.

        So a quick update, to help anyone looking for a similar solution.

        I have a working scraper bot now, which handles the info in the AJAX/JS pop-up. I had to resort to sniffing the HTTP with tools/browser plug-ins, and I then mimic the HTTP POSTs that went over the wire using HTTP::Request::Common. This was the solution I was trying to avoid (as discussed above in this thread), primarily because a bot that needs to be more autonomous than mine, such as a crawler, is better served by a more programmatic, self-contained solution. This is what I was trying to explain to Anonymous Monk. I tried several modules and approaches to do that, without success. But I should note, for those who want to try, that I did not exhaust every route with potential, so more work with something like WWW::Mechanize::Firefox could possibly be fruitful.
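
        For anyone following the same route, the sniff-and-replay part is roughly this (a minimal sketch; the endpoint and form fields are placeholders for whatever shows up on the wire):

            use LWP::UserAgent;
            use HTTP::Request::Common qw(POST);

            my $ua = LWP::UserAgent->new( cookie_jar => {} );    # keep session cookies

            # mimic the POST that the page's JavaScript normally sends;
            # the URL and form fields below are placeholders from the sniffed traffic
            my $req = POST 'http://example.com/ajax/popup',
                [ action => 'getDetails', record_id => 42 ];

            my $res = $ua->request($req);
            die $res->status_line unless $res->is_success;
            print $res->decoded_content;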

        If your scraper is specific to a stable site, or does not need to be an autonomous crawler, I would recommend just cheating past the complexity and sniffing/mimicking the HTTP as described in this thread.

        Thanks. :)

      since my expectation from the proclaimed JavaScript support is that the module would remove the need for the user to sniff HTTP with tools.

      Let's see: an experimental, alpha-level browser produced by a single man, versus browsers backed by 20 years and millions of dollars from Microsoft/Mozilla... gee, I wonder which one works better.