Each thread receives its own timeslice.
Ambiguously worded, but any thread of a process can (and, without extraordinary measures to prevent it, will) be scheduled on any available core, and different threads from the same process may be running on different cores concurrently.
It depends. On many things.
Threads can have a heavy spawn-time cost because they do their data copying up front: creating an ithread clones the interpreter's existing data into the new thread at spawn time.
Forks can appear cheaper up front because of copy-on-write (COW), but without extreme care even apparently read-only access to COW pages can cause ongoing piecemeal duplication of data: Perl's reference counting writes to a value's header whenever that value is touched, dirtying the page it lives on, and those copies can ultimately add up to a higher overhead than paying for the copy once at spawn time.
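Purely to illustrate the two spawn paths (this is my sketch, not code from the thread; the payload size, loop counts and do-nothing workers are arbitrary assumptions), something like this times bare thread creation against bare fork/waitpid. The thread path pays its cloning at create time; the forked children here never touch the parent's data, so its COW pages stay shared:

    #!/usr/bin/perl
    # Rough illustration only: raw spawn+join cost of ithreads vs fork+waitpid.
    # Real costs depend heavily on how much data the parent process holds.
    use strict;
    use warnings;
    use threads;
    use Time::HiRes qw(time);

    # Parent-side data: ithreads clone this at spawn time;
    # fork shares it via COW until a page is dirtied.
    my @payload = (1 .. 100_000);

    my $t0 = time;
    for (1 .. 20) {
        my $thr = threads->create( sub { return 0 } );
        $thr->join;
    }
    printf "20 thread spawns: %.3fs\n", time - $t0;

    my $t1 = time;
    for (1 .. 20) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ( $pid == 0 ) { exit 0 }    # child does nothing and exits
        waitpid $pid, 0;               # parent reaps it
    }
    printf "20 forks:         %.3fs\n", time - $t1;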
All that said, you're asking the wrong questions. In most cases, the choice of whether threads or forks are appropriate for a specific application is less about how fast you can spawn another.
It really depends upon what your application is doing. Does it need to share data bi-directionally? Does it require a spawn-and-discard approach, or would it be better served by pooling?
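For the pooling case, a minimal sketch (again mine, not from the thread; the worker count, queue names and the trivial "double it" work are placeholders): a fixed pool of ithreads services a shared Thread::Queue of requests and pushes results back over a second queue, which is one simple way to get bi-directional sharing without respawning anything:

    #!/usr/bin/perl
    # Minimal worker-pool sketch: N threads pull items from a request queue
    # and push results onto a response queue.
    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    my $requests  = Thread::Queue->new;
    my $responses = Thread::Queue->new;

    my @pool = map {
        threads->create( sub {
            # Loop until we dequeue an undef sentinel.
            while ( defined( my $item = $requests->dequeue ) ) {
                $responses->enqueue( $item * 2 );    # stand-in for real work
            }
        } );
    } 1 .. 4;

    $requests->enqueue( 1 .. 10 );
    $requests->enqueue( (undef) x @pool );    # one shutdown sentinel per worker

    $_->join for @pool;
    print "$_\n" for map { $responses->dequeue } 1 .. 10;

One undef sentinel per worker is the traditional way to shut such a pool down; results come back in whatever order the workers finish, so tag the requests if ordering matters.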