in reply to Is there a way to get data from a queried web site without having to parse the resulting HTML?
As an alternative to screen-scraping, you could write a quick logging HTTP proxy in Perl and point your browser at it, then use the site as you normally would and take a look at the requests it generates. Duplicating them programmatically from the log output should then be trivial.

Ugh, completely misread your question.
No, that's unfortunately not possible unless the site's backend provides such a facility. (The code running PerlMonks can be told to return XML for many things, for example.) If you're dealing with tables, you might want to take a gander at HTML::TableExtract. It has served me well in dealing with pages too ugly to dissect manually.
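A minimal sketch of how HTML::TableExtract is typically used; the HTML snippet and the column headers here are made up for illustration, not taken from any particular site:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::TableExtract;

# Stand-in for a page you fetched; the table and its
# headers ("Node", "Rep") are hypothetical.
my $html = <<'HTML';
<table>
  <tr><th>Node</th><th>Rep</th></tr>
  <tr><td>foo</td><td>42</td></tr>
  <tr><td>bar</td><td>7</td></tr>
</table>
HTML

# Select only tables whose header row contains these columns;
# everything else on the page is ignored.
my $te = HTML::TableExtract->new( headers => [ 'Node', 'Rep' ] );
$te->parse($html);

for my $ts ( $te->tables ) {
    for my $row ( $ts->rows ) {
        print join( "\t", @$row ), "\n";
    }
}
```

Matching on headers rather than table position is what makes this robust against ugly markup: the module walks every table in the page and hands you only the rows under the columns you asked for.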
Makeshifts last the longest.