in reply to Re^4: Proposal how to make modules using fork more portable
in thread Proposal how to make modules using fork more portable

Gah - I didn't link to the top level post but to a mail from the middle of the thread. The top level mail describes the situation as follows:

Currently it is rather difficult to cleanly terminate a Perl program using fork() emulation on Windows:

The Perl process will only terminate once the main thread *and* all forked children have terminated.

So if the child process might be waiting in a blocked system call, we may end up with a deadlock.

The standard "solution" is to use kill(9, $child), as this is the only mechanism that will terminate a blocked thread.

However, using kill(9, $pid) may destabilize the process itself if the child process is not blocked, but actively doing something, like allocating memory from the shared heap.

So, if a pseudo-fork thread is doing some kind of system call (not only IO, but likely, as IO just takes relatively long) we get a deadlock, as the parent process needs to wait for the child thread to exit, but the child thread will never exit as it is in some blocking call.
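
To make that failure mode concrete, here is a minimal sketch of the pattern (assuming Windows fork() emulation; the blocking read, the delays and the variable names are illustrative, not taken from the original mail):

    use strict;
    use warnings;

    defined(my $pid = fork()) or die "fork failed: $!";

    if ($pid == 0) {
        # The pseudo-process child (really a thread inside the same Windows
        # process) blocks indefinitely in a system call.
        my $line = <STDIN>;
        exit 0;
    }

    sleep 2;            # give the child time to reach the blocking read

    # kill(9, ...) is the only mechanism that terminates a blocked
    # pseudo-process thread -- but it is unsafe if the child happens to be
    # running (e.g. allocating from the shared heap) rather than blocked.
    kill 9, $pid;

    waitpid $pid, 0;    # reap the pseudo-process so the interpreter can exit
    print "parent done\n";

Without the kill, the parent waits forever for a child that will never return from the blocking read; with it, a child that happens to be running rather than blocked can be terminated mid-allocation and destabilize the whole process.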

Actually, I thought there were more problems with kill -9 than just "no cleanup", in the sense that the kill could create another deadlock. But upon rereading the mail, I concur with you that the lockup is mostly caused by the implementation of signals on Windows in combination with the implementation of fork().


Re^6: Proposal how to make modules using fork more portable
by BrowserUk (Patriarch) on Apr 01, 2011 at 20:44 UTC
    in the sense that the kill could create another deadlock.

    There is no "deadlock" involved. A process (or thread) that is prevented from running due to unsatisfied blocking IO is simply blocked, not deadlocked.

    And, as demonstrated above, you don't need either pseudo-processes or threads in the mix for that to occur. This will never terminate until someone hits a key:

    perl -e"alarm(10); <STDIN>"
    So, if a pseudo-fork thread is doing some kind of system call (not only IO, but likely, as IO just takes relatively long) we get a deadlock,

    First: it doesn't require a system call. Any Perl op-code that runs for a long time--if I could find one of those pathological regexes, I could demonstrate that--will block interrupts, because since safe signals were implemented, signals are only seen once the current op-code returns to the run-loop. I'm not sure, but I think that is true of signals on *nix as well as of Windows' rather crappy signals emulation.
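
    A sketch of that kind of demonstration, with a default (C-level) sort of a large list standing in for the pathological regex, since it too runs entirely inside a single op-code (the list size and the 2-second alarm are arbitrary; adjust to taste):

    use strict;
    use warnings;

    $SIG{ALRM} = sub { print "ALRM handled after ", time - $^T, " seconds\n" };
    alarm(2);

    # One long-running op-code: a default (no comparator block) sort runs
    # entirely in C, so no Perl ops execute while it works. With safe
    # signals, the ALRM handler cannot run until this op returns.
    my @sorted = sort map { rand() } 1 .. 5_000_000;

    print "sort returned after ", time - $^T, " seconds\n";

    The alarm fires at roughly two seconds, but the handler only runs once the sort op returns to the run-loop, so both messages appear at essentially the same moment.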

    But the result isn't a 'deadlock'--a term which has particular connotations with regard to threading and locking, though it can also occur between two (real) processes using IPC. It is just good old-fashioned 'blocking'.

    There is a risk of a true deadlock if a thread is forcibly terminated (TerminateThread()), in that the terminated thread could leave a mutex or semaphore in the locked state, thereby preventing further progress by the remaining thread(s) in the process. But again, this isn't attributable to either pseudo-processes or Windows signals emulation.

    The same thing can happen whenever you force termination without cleanup--of a thread, pseudo-process or real process--that uses any form of locking. Even real processes under *nix using SysV semaphores or even just file locks.
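
    A sketch of that failure mode with real processes on *nix, using a SysV semaphore acquired without SEM_UNDO (IPC::SysV and IPC::Semaphore are core modules; the timings are illustrative):

    use strict;
    use warnings;
    use IPC::SysV qw(IPC_PRIVATE IPC_CREAT S_IRWXU);
    use IPC::Semaphore;

    my $sem = IPC::Semaphore->new(IPC_PRIVATE, 1, IPC_CREAT | S_IRWXU)
        or die "semget failed: $!";
    $sem->setval(0, 1);            # a binary "mutex", initially free

    defined(my $pid = fork()) or die "fork failed: $!";
    if ($pid == 0) {
        $sem->op(0, -1, 0);        # acquire it -- deliberately no SEM_UNDO
        sleep 60;                  # "working" while holding the lock
        $sem->op(0, 1, 0);         # never reached after the kill below
        exit 0;
    }

    sleep 1;
    kill 9, $pid;                  # forced termination, no cleanup
    waitpid $pid, 0;

    # The semaphore is still decremented by the dead child, so this
    # blocks forever unless the set is cleaned up by hand.
    print "waiting for the lock the dead child still holds...\n";
    $sem->op(0, -1, 0);

    The orphaned semaphore set also outlives both processes, so it has to be inspected and removed manually afterwards (ipcs/ipcrm).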



      I would attribute the missing resource cleanup of a pseudo-process-thread to how forked processes are implemented on Windows.

      I would hope that an OS cleans up all resources held by a terminated process. But unfortunately I know that this is not always the case even outside of pseudo-forks, for example with shared memory sections.

        I would attribute the missing resource cleanup of a pseudo-process-thread to how forked processes are implemented on Windows.

        The missing clean-up only happens if the pseudo-process is forcibly terminated. And that only becomes a necessary option if the application design assumes that it should be able to interrupt an IO-blocked process with a signal. Which it cannot.

        I would hope that an OS cleans up all resources held by a terminated process.

        From what I've picked up through osmosis rather than through any practical experience, there are various things that will survive an interrupted process under some or all variants of *nix: file locks on NFS filesystems; SysV shared memory and semaphores; named pipes. I also vaguely recall something about listening ports (I can't remember whether these were unix or inet domain) that remained open after the process that created them died, until they timed out. Typically 15 minutes?

        Windows has its own fair share of system resources that can remain in use beyond the process that created them if they are not explicitly cleaned up. Almost any named system object--pipe, semaphore, mutex, etc.--will, by design, persist until it is explicitly closed, or until reboot.

