I'm not aware of any such scraper. I would first try to subclass WWW::Mechanize to use some event-based parser, or even regular expressions, to extract the forms from the response. To save more memory, either do the parsing in the :content_cb callback directly, or store each page to disk and parse the content from there separately, either for forms or for data.
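Something along these lines is what I have in mind for the store-to-disk variant (untested; the URL and file name are placeholders, and I'm using plain LWP::UserAgent here so no copy of the body is kept around):

    use strict;
    use warnings;
    use LWP::UserAgent;

    my $ua   = LWP::UserAgent->new;
    my $url  = 'http://example.com/big-page.html';   # placeholder
    my $file = '/tmp/page.html';                      # placeholder

    # :content_file streams the response body straight to disk instead
    # of accumulating it in $res->content.
    my $res = $ua->get( $url, ':content_file' => $file );
    die "GET failed: ", $res->status_line unless $res->is_success;

    # The saved file can then be parsed separately, for example with
    # HTML::Form->parse or an event-based HTML::Parser pass.

A subclass of WWW::Mechanize could wrap something like this so the rest of the Mechanize API stays available for navigation.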
The current trend within WWW::Mechanize skews somewhat towards using HTML::TreeBuilder for building a DOM, but if you have a proposal for what an API that sacrifices the stored content for lower memory usage would look like, I'm certainly interested, and maybe other people are as well.
One thing I could imagine would be some kind of event-based HTML::Form parser that sits in the content callback of LWP, so that WWW::Mechanize (or whatever subclass) can extract that data no matter what happens to the content afterwards. But I'm not sure how practical that is, as the response sizes I deal with are far smaller.
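As a rough, untested sketch of that idea (with a hand-rolled collector standing in for a real streaming HTML::Form, and a placeholder URL): an HTML::Parser in event mode sits in LWP's :content_cb, so form fields are collected as the chunks arrive and the body itself is never accumulated.

    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTML::Parser;

    my @forms;    # each element: { attrs => {...}, inputs => [...] }

    my $p = HTML::Parser->new(
        api_version => 3,
        start_h     => [ sub {
            my ( $tag, $attr ) = @_;
            if ( $tag eq 'form' ) {
                push @forms, { attrs => {%$attr}, inputs => [] };
            }
            elsif ( @forms && $tag =~ /^(?:input|select|textarea|button)$/ ) {
                push @{ $forms[-1]{inputs} }, {%$attr};
            }
        }, 'tagname, attr' ],
    );

    my $ua  = LWP::UserAgent->new;
    my $res = $ua->get(
        'http://example.com/form-heavy-page.html',   # placeholder
        ':content_cb' => sub {
            my ($chunk) = @_;
            $p->parse($chunk);    # parse the chunk, then let it go
        },
    );
    $p->eof;

    printf "found %d form(s)\n", scalar @forms;

This only records start-tag attributes, so things like option values inside a select would need extra handling, but it shows where the hook would go.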