eibwen has asked for the wisdom of the Perl Monks concerning the following question:
It seems certain wiki-esque websites store the mapping from image files to site content entities in a database, e.g. ContentID => ImageID, thereby correlating files of the form ImageID.ext with the source content. Further, a single Image table is shared across multiple content types, so that in general a given ContentID != ImageID, excepting a possible edge case.
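Concretely, the local mirror of that mapping could look something like the following minimal sketch (the table and column names here are my own illustrative choices, not the remote site's actual schema):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical local schema for the ContentID => ImageID mapping;
# names are illustrative, not the remote site's actual schema.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=mirror.db', '', '',
    { RaiseError => 1, AutoCommit => 1 } );

$dbh->do(<<'SQL');
CREATE TABLE IF NOT EXISTS content_image (
    content_id INTEGER NOT NULL,
    image_id   INTEGER NOT NULL,
    PRIMARY KEY (content_id, image_id)
)
SQL
```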
The intent is to create a local mirror of the images and of the ContentID => ImageID correlation for a specific content type; however, the site has explicitly requested that users not scrape the HTML directly. While I could `seq | wget --random-wait` and the like, I have chosen to defer to the site's request and use an indirect approach.
To this end, I have been considering indirectly scraping the site during normal browsing sessions: a Greasemonkey script makes an AJAX call to a localhost CGI, which records the ContentID => ImageID mapping in a SQLite database (via DBI with DBD::SQLite), after which the images are copied directly out of the Firefox cache...
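A rough sketch of the localhost CGI endpoint I have in mind follows; the `content_id`/`image_id` parameter names and the `content_image` table are my own assumptions, matching the schema above, and input validation is omitted for brevity:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use DBI;

# Receives a content_id/image_id pair from the Greasemonkey AJAX
# call and records it in the local SQLite mapping database.
my $q   = CGI->new;
my $dbh = DBI->connect( 'dbi:SQLite:dbname=mirror.db', '', '',
    { RaiseError => 1, AutoCommit => 1 } );

my $sth = $dbh->prepare(
    'INSERT OR IGNORE INTO content_image (content_id, image_id)
     VALUES (?, ?)'
);
$sth->execute( scalar $q->param('content_id'),
               scalar $q->param('image_id') );

print $q->header('text/plain'), "OK\n";
```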
Lastly, note that the content pages (whose corresponding images are used to derive the ContentID => ImageID mapping) are served by CGI and thereby bypass the standard cache -- otherwise I would simply parse the HTML from the cache rather than take the Greasemonkey AJAX to localhost CGI approach.
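In lieu of a dedicated cache-parsing module, one crude fallback I have considered for the image-copying step is to scan the cache directory for files beginning with known image magic bytes and copy them out. This is only a sketch, and it assumes larger entries are stored as individual files (entries packed into the _CACHE_00n_ block files of the old disk cache format are not handled):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Copy qw(copy);

# Scan a Firefox cache directory for files that start with known
# image signatures and copy them to a mirror directory.
my $cache_dir = shift @ARGV or die "usage: $0 <cache dir> <out dir>\n";
my $out_dir   = shift @ARGV or die "usage: $0 <cache dir> <out dir>\n";
-d $out_dir or die "no such directory: $out_dir\n";

my %magic = (
    "\xFF\xD8\xFF"      => 'jpg',
    "\x89PNG\r\n\x1A\n" => 'png',
    'GIF8'              => 'gif',
);

find( sub {
    return unless -f $_;
    open my $fh, '<:raw', $_ or return;
    read $fh, my $head, 8;
    close $fh;
    for my $sig ( keys %magic ) {
        next unless index( $head, $sig ) == 0;
        copy( $_, "$out_dir/$_.$magic{$sig}" )
            or warn "copy failed for $File::Find::name: $!";
        last;
    }
}, $cache_dir );
```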
Replies are listed 'Best First'.
Re: Perl Module for Mozilla Firefox Cache (Metadata + Data)?
by Anonymous Monk on Aug 23, 2011 at 15:40 UTC