Re^3: Allowing user to abort waitpid
by Anonymous Monk on Mar 07, 2016 at 19:52 UTC
This (waitpid returning) may be the case with OP's legacy environment... Current perl appears to set up the signal handlers with SA_RESTART
Wow, anonymonk... I didn't know that. You're right (well, almost). It's not SA_RESTART, it's
PP(pp_waitpid)
{
    ...
    if (PL_signals & PERL_SIGNALS_UNSAFE_FLAG)
        result = wait4pid(pid, &argflags, optype);
    else {
        while ((result = wait4pid(pid, &argflags, optype)) == -1 && errno == EINTR) {
            PERL_ASYNC_CHECK();
        }
    }
    ...
(pp.c)
There is some mention in perlipc:
    On systems that supported it, older versions of Perl used the SA_RESTART flag when installing %SIG handlers. This meant that restartable system calls would continue rather than returning when a signal arrived. In order to deliver deferred signals promptly, Perl 5.8.0 and later do not use SA_RESTART. Consequently, restartable system calls can fail (with $! set to "EINTR") in places where they previously would have succeeded. The default ":perlio" layer retries "read", "write" and "close" as described above; interrupted "wait" and "waitpid" calls will always be retried.
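So a signal handler by itself won't pop a blocking waitpid back out to Perl-level code. If the goal is to let the user abort the wait, one common workaround — only a minimal sketch here, assuming the user aborts with Ctrl-C and using WNOHANG from the core POSIX module — is to poll instead of block:

use strict;
use warnings;
use POSIX ':sys_wait_h';            # for WNOHANG

my $pid = fork() // die "fork: $!";
if ($pid == 0) { sleep 60; exit 0 } # child: stand-in for the real long-running work

my $aborted = 0;
$SIG{INT} = sub { $aborted = 1 };   # Ctrl-C just sets a flag

my $done = 0;
until ($done or $aborted) {
    my $got = waitpid($pid, WNOHANG);   # returns 0 while the child is still running
    if    ($got == $pid) { $done = 1 }
    elsif ($got == -1)   { die "waitpid: $!" }
    else                 { select(undef, undef, undef, 0.25) }  # nap 1/4s, then re-check
}
print $done ? "child finished\n" : "wait aborted by user\n";

If the user does abort, the child is of course still running; whether to kill it or just abandon it is up to the application.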
Well, I stand corrected, then.
Re^3: Allowing user to abort waitpid
by BrowserUk (Patriarch) on Mar 07, 2016 at 19:40 UTC
nixers are a funny bunch.
Ask about how to multiplex a few hundred tcp clients transferring gobs of data, and they'll almost universally suggest using a select loop or other polled event mechanism, which in modern high-speed comms environments requires polling with millisecond or smaller resolution to be responsive to even tens of clients. And that can consume 60% to 70% of a cpu just polling.
But ask about getting conditional input from the guy sitting at the keyboard, which requires polling no more than once every 1/10th of a second, which will consume so little cpu that it won't even show; and they call it hackish.
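For illustration, a minimal sketch of that 1/10th-of-a-second keyboard check — assuming a POSIX-ish terminal where STDIN can be handed to select (a Win32 console needs a different test, e.g. Term::ReadKey or Win32::Console):

use strict;
use warnings;

# Watch STDIN with the 4-arg select, waking at most 10 times a second.
my $rin = '';
vec($rin, fileno(STDIN), 1) = 1;

while (1) {
    my $rout   = $rin;
    my $nfound = select($rout, undef, undef, 0.1);   # block for at most 0.1s
    if ($nfound > 0) {
        my $line = <STDIN>;                          # terminal is line-buffered: needs Enter
        last if !defined $line or $line =~ /^q/i;    # EOF or 'q' means the user wants out
    }
    # ... check on / continue the real work here ...
}

At ten wake-ups a second, the select itself won't register in the CPU figures.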
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
In the absence of evidence, opinion is indistinguishable from prejudice.
select only involves polling when done incorrectly.
And that can consume 60% to 70% of a cpu just polling.
Yeah, when I use select, it doesn't burn CPU when waiting.
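For what it's worth, the non-burning case is just a blocking select — a small sketch using the core IO::Select wrapper, with STDIN standing in for whatever sockets you're watching:

use strict;
use warnings;
use IO::Select;

my $sel = IO::Select->new(\*STDIN);   # add real client sockets the same way
# No timeout argument: can_read sleeps in the kernel until a handle is
# readable, so the process uses essentially no CPU while it waits.
my @ready = $sel->can_read;

for my $fh (@ready) {
    my $line = <$fh>;
    print "got: $line" if defined $line;
}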
Well, select (and poll) do have inherent limitations. On each call to select the kernel must check every descriptor you passed in to see whether it is ready, so the cost of a single call grows with the number of descriptors; and in a busy application select can be called very often. I think it's fair to call that 'polling'.
With epoll and other similar mechanisms, you register all the 'interesting' descriptors once, and then, when IO happens on some descriptor, the kernel checks whether the application is interested in it. The cost is therefore determined by the number of IO events.
If we have a ton of descriptors but, at any given time, IO actually happens on only a small fraction of them, epoll will vastly outperform select, because, yes, it does less 'polling'.
Yeah, when I use select, it doesn't burn CPU when waiting.
But do you have 10k concurrent connections :)
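To make the contrast concrete, here's a rough, Linux-only sketch of the 'register once, wake per event' side — assuming the CPAN module IO::Epoll, which wraps the epoll_* calls more or less one-to-one (none of this is from the thread itself):

use strict;
use warnings;
use IO::Epoll;                       # CPAN, Linux only (an assumption: not a core module)

my @clients = ();                    # hypothetical: already-accepted client sockets

# Register each descriptor once...
my $epfd = epoll_create(1024);
for my $client (@clients) {
    epoll_ctl($epfd, EPOLL_CTL_ADD, fileno($client), EPOLLIN) >= 0
        or die "epoll_ctl: $!";
}

# ...then each wait hands back only the descriptors with pending events, so the
# per-wakeup cost tracks the number of events, not the number of clients.
while (1) {
    my $events = epoll_wait($epfd, 64, -1);   # up to 64 events, wait forever
    last unless defined $events;
    for my $ev (@$events) {
        my ($fd, $mask) = @$ev;               # [fd, event mask] pairs, per the module's interface
        # ... find the socket that owns $fd and read from it ...
    }
}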
Not a good example: select and poll are indeed slow and not recommended for "a few hundred tcp clients"; use epoll/kqueue/signal-driven IO (well, maybe not signal-driven IO).