in reply to Fork vs pThreads

But if one of the streams is bigger than the others, the remaining 49, instead of starting to parse new streams, have to wait for the one that is taking more time to process.

That doesn't ring true. What evidence do you have for that conclusion?

Also, how many cores do you have?


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^2: Fork vs pThreads
by ThelmaJay (Novice) on Oct 21, 2013 at 10:58 UTC
    I'm using Amazon's EC2 m1.xlarge (4 vCPUs). My evidence is waiting_all_children(), and some prints I added to record when each thread begins and ends. And also the time it took from one block of 50 to the next block of 50, which was the same as the time taken by the longest one.
      And also the time it took from one block of 50 to other block of 50 and it was the same time as the longer one.

      Of course. How could it be otherwise?

      If you draw 10 parallel lines of different lengths:

      ---------------
      ------
      ----------------
      ----
      -
      -----------
      --------
      ------
      --------------
      -----

      Is there any way to make the overall width less than the longest line?

      Same thing.
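
      The point can be put in numbers. A toy sketch (Python for illustration, with made-up durations, not the OP's data): a batch of concurrent tasks finishes only when its slowest member does, so the batch's wall-clock time is the maximum duration, not the average.

      ```python
      # Hypothetical durations (seconds) for one batch of concurrent streams.
      durations = [15, 6, 16, 4, 1, 11, 8, 6, 14, 5]

      # With enough cores to run them all at once, the batch completes when
      # the longest task completes.
      batch_wall_time = max(durations)                     # 16
      average_duration = sum(durations) / len(durations)   # 8.6

      print(batch_wall_time, average_duration)
      ```

      However uneven the mix, the inter-batch gap the OP measured can never be shorter than that maximum.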

      As an aside, but entirely relevant: running all 50 tasks concurrently on 4 CPUs will take longer than running those same 50 tasks only 4 at any given time.
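
      A rough back-of-the-envelope model of why (Python for illustration; the switch cost and time-slice values are assumed, not measured on the OP's EC2 instance): with a pool of 4, each core runs tasks back to back with few switches; with all 50 started at once, the 4 cores must time-slice among them, and every context switch (plus cold caches) adds overhead.

      ```python
      # 50 identical CPU-bound tasks of 1 second each (assumed), 4 cores.
      tasks = [1.0] * 50
      cores = 4
      switch_cost = 0.002   # assumed per-context-switch overhead, seconds
      quantum = 0.01        # assumed scheduler time slice, seconds

      work = sum(tasks)                 # 50 s of CPU work in total
      pool_makespan = work / cores      # 4 at a time, back to back: 12.5 s

      # All 50 at once: each task of length t is chopped into t/quantum
      # slices, and every slice ends in a context switch.
      switches = sum(t / quantum for t in tasks)
      oversubscribed_makespan = (work + switches * switch_cost) / cores

      print(pool_makespan)             # 12.5
      print(oversubscribed_makespan)   # 15.0
      ```

      The exact numbers are invented; the shape of the result is not: the oversubscribed run does the same 50 seconds of work plus thousands of extra switches, so it can only be slower.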


        Sorry, I'm new to this. I don't understand why running 50 tasks concurrently will take longer than running them in packs of 4.