improved WWW::Search?
No, this is not about search engines. A "query" targets a specific web page (usually an HTML or XML page), or a superposition of related pages that share the same structure and whose URLs differ only by some parameter.
Whenever it is executed, the query downloads its target page and extracts information from it. That information is logically structured as tabular data, and is returned as a sequence of hashes/objects, where each hash/object represents a "row" of the table.
use Web::Query;
use Web::Magic;
use Web::Scraper;
my $object = ... ; ## XPath-ish data extraction using magic above
Yes, it's totally possible to do it using those modules, just as it is possible to use LWP::UserAgent and XML::Parser/XML::LibXML/etc. directly as I have been doing.
But then each query definition ends up being a block of imperative source code of some form, which of course provides maximum flexibility, but which I find inconvenient to debug and maintain when dealing with lots of queries. What I'm envisioning is a declarative approach tailored to the use-case of extracting tabular data as described above (and in the future maybe other regularly structured data, such as "scalar" or "flat list"), one that allows the user to supply everything needed to define the query as Perl data (i.e. a hash) rather than as imperative code.
It will have less power and flexibility than the modules you listed, but will hopefully allow query definitions to be neater and more regular, and thus easier to maintain.
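To make the idea concrete, here is a rough sketch of what such a declarative query definition might look like. Everything here is an assumption for illustration: the module name, the key names, and the `run` function are all hypothetical, not an existing API.

```perl
use strict;
use warnings;

# Hypothetical declarative query definition. The module name and all
# keys below are assumptions illustrating the envisioned interface,
# not a real API.
my %quotes_query = (
    # Target page; a varying parameter could be interpolated here.
    url     => 'http://example.com/quotes?page=1',

    # XPath expression selecting one node per "row" of the table.
    rows    => '//table[@id="quotes"]/tr[position() > 1]',

    # Column name => XPath evaluated relative to each row node.
    columns => {
        symbol => './td[1]',
        price  => './td[2]',
        volume => './td[3]',
    },
);

# Executing the query would download the page and return one hash
# per row, e.g.:
# ( { symbol => 'FOO', price => '12.34', volume => '5678' }, ... )
```

The point is that the entire query lives in a plain Perl data structure, so many queries can be kept side by side, diffed, and validated uniformly, with the imperative fetch-and-extract machinery written once in the module.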
PS: Although Web::Magic does look pretty sweet...