in reply to Re: How to download html with threads?
in thread How to download html with threads?

I applaud (and upvoted) your post, but would just point out one thing. Since you are retrieving the entire contents of each url as a single string, and then processing that string with a single regex, pushing that data onto a shared queue, reading it back to process it, and then passing the concatenated results to another thread via another queue is going to cost far more than it will ever save.
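
By way of illustration, here is a minimal sketch of the simpler arrangement. It assumes LWP::Simple and Thread::Queue, and the link-extraction regex, file name and end-of-data convention are my own invention, not taken from your code. Each worker fetches and regexes its own page, so only the small, already-extracted results ever cross a queue boundary:

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;
    use LWP::Simple qw( get );

    my $results = Thread::Queue->new;

    # Each worker fetches and regexes its own page; only the small,
    # already-extracted results are ever enqueued.
    my @workers = map {
        threads->create( sub {
            my $url  = shift;
            my $html = get( $url );
            defined $html or return;
            for my $link ( $html =~ m{<a\s+[^>]*href="([^"]+)"}gi ) {
                $results->enqueue( "$url\t$link" );
            }
        }, $_ );
    } @ARGV;

    $_->join for @workers;
    $results->enqueue( undef );      # end-of-data marker

    # Drain the queue in the main thread; only one writer ever exists.
    open my $fh, '>>', 'links.txt' or die "open: $!";
    while ( defined( my $line = $results->dequeue ) ) {
        print {$fh} "$line\n";
    }
    close $fh;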

You are also starting multiple threads that all append to a single file, but you are not mutexing the writes. In the olden days, it was generally considered safe to write to files in append mode from multiple processes, because CRTs guaranteed 'atomic' writes in append mode. It is not at all clear whether any or all builds of Perl use the underlying CRT for this. Nor is it clear whether any or all CRTs make the same guarantees when called from multiple threads.
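
If the writes really do need to come from more than one thread, one way to make them explicitly safe is to serialise them on a shared variable with lock(), rather than relying on whatever the CRT may or may not guarantee. A minimal sketch, with a hypothetical locked_append() helper of my own naming:

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my $append_lock :shared;        # shared variable used only as a mutex

    # Serialise the appends ourselves; only one thread at a time can
    # hold the lock, so lines from different threads cannot interleave.
    sub locked_append {
        my( $path, $line ) = @_;
        lock( $append_lock );       # released when it goes out of scope
        open my $fh, '>>', $path or die "open '$path': $!";
        print {$fh} $line, "\n";
        close $fh;                  # flush before the lock is released
    }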


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
"Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."

Re^3: How to download html with threads?
by Trizor (Pilgrim) on Jul 31, 2007 at 06:26 UTC

    There aren't multiple threads writing to a single file in my example code, only the capability, because WriteOut was wrapped in a sub so it could be made a thread. Only one writer thread is created, to atomically dequeue the processed data and write it out.
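
    For anyone reading along, the pattern amounts to something like this minimal sketch (the queue name, file name and end-of-data convention are my own choices, not lifted from the example code):

        use strict;
        use warnings;
        use threads;
        use Thread::Queue;

        my $to_writer = Thread::Queue->new;

        # One dedicated writer owns the filehandle, so the question of
        # atomic appends never arises.
        my $writer = threads->create( sub {
            open my $fh, '>>', 'out.txt' or die "open: $!";
            while ( defined( my $line = $to_writer->dequeue ) ) {
                print {$fh} "$line\n";
            }
            close $fh;
        } );

        # ... worker threads call $to_writer->enqueue( $processed ) ...

        $to_writer->enqueue( undef );   # tell the writer it can finish
        $writer->join;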

    As for the overhead issue: in its current state the overhead doesn't merit separate threads, but if this grows and starts using some form of HTML parser in the parse stage, then the split begins to make more sense. HTML parsers can be slower than the downloads that feed them, so separating the stages allows the downloads to finish sooner and leaves room for the parsing.
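
    A rough sketch of that split, assuming Thread::Queue between the stages and with HTML::TokeParser standing in for whatever parser ends up being used:

        use strict;
        use warnings;
        use threads;
        use Thread::Queue;
        use LWP::Simple qw( get );
        use HTML::TokeParser;               # example parser only

        my $pages   = Thread::Queue->new;   # raw html from the downloaders
        my $results = Thread::Queue->new;   # extracted data for the writer

        # Downloaders only fetch; they never wait on the (slower) parser.
        my @downloaders = map {
            threads->create( sub {
                my $html = get( shift );
                $pages->enqueue( $html ) if defined $html;
            }, $_ );
        } @ARGV;

        # A single parser thread works through pages as they arrive.
        my $parser = threads->create( sub {
            while ( defined( my $html = $pages->dequeue ) ) {
                my $p = HTML::TokeParser->new( \$html );
                while ( my $tag = $p->get_tag( 'a' ) ) {
                    my $href = $tag->[1]{ href };
                    $results->enqueue( $href ) if defined $href;
                }
            }
            $results->enqueue( undef );     # nothing more for the writer
        } );

        $_->join for @downloaders;
        $pages->enqueue( undef );           # no more pages coming
        $parser->join;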