I usually use WWW::Mechanize for my scraping needs, but for my latest project the memory usage is not acceptable. Yes, I do set stack_depth to 0. The problem is that Mech stores the response in memory, then there's the much larger decoded_content if the response was compressed, and then a copy of the HTML data as it's passed to HTML::Form. Right now my single process is at 200 MB, and I was planning on running multiple simultaneous processes.
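For reference, a minimal sketch of the setup described above (the URL is a placeholder):

```perl
use strict;
use warnings;
use WWW::Mechanize;

# stack_depth => 0 disables the page-history stack, so old pages
# are not retained -- but the *current* response object, its
# decoded content, and the parsed link/form structures all still
# live in memory at once.
my $mech = WWW::Mechanize->new( stack_depth => 0 );
$mech->get('http://example.com/');

# content() returns the full decoded page held in RAM;
# links() and forms() are parsed from that in-memory copy.
my @links = $mech->links;
my @forms = $mech->forms;
```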

I know I can save the response content to a file with :content_file, but then much of Mech's other functionality disappears, such as link and form parsing. Does a memory-efficient alternative exist that writes responses to disk and parses the content incrementally?
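To illustrate the trade-off: with :content_file the body streams to disk and never sits in $mech, but you then have to re-parse the file yourself. A rough sketch of what that looks like with HTML::TokeParser, a pull-style parser that reads the file incrementally (URL and path are placeholders):

```perl
use strict;
use warnings;
use WWW::Mechanize;
use HTML::TokeParser;    # stream-oriented parser built on HTML::Parser

my $mech = WWW::Mechanize->new( stack_depth => 0 );

# The body goes straight to the file instead of into $mech,
# so $mech->content, links(), forms(), etc. have nothing to offer.
$mech->get( 'http://example.com/big-page',
            ':content_file' => '/tmp/page.html' );

# Re-extract links incrementally from the file, one token at a time,
# without ever loading the whole document into memory.
my $p = HTML::TokeParser->new('/tmp/page.html')
    or die "Can't parse /tmp/page.html: $!";
while ( my $tag = $p->get_tag('a') ) {
    my $href = $tag->[1]{href};
    print "$href\n" if defined $href;
}
```

This recovers link extraction cheaply, but reproducing Mech's form handling this way (feeding tags back into HTML::Form) is considerably more work.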


In reply to Are there any memory-efficient web scrapers? by Anonymous Monk
