in reply to Re^3: Is this absurd, or have I not RTFM?
in thread Is this absurd, or have I not RTFM?

I think we have already concluded that the unbalanced BEGIN/END printouts can be explained by the process being interrupted by a signal before it reaches that code or flushes its output buffer... see the straces above.
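
For illustration, a minimal sketch (the log path and timings are made up, not the actual server code) of how a buffered print can vanish when a process is killed by an untrapped signal before it gets to flush:

    use strict;
    use warnings;

    # Reopen STDOUT to a file so it is block-buffered, as it would be for
    # a daemonized server writing to a log (the path is hypothetical).
    open STDOUT, '>', '/tmp/buffer-demo.log' or die "open: $!";

    my $pid = fork() // die "fork failed: $!";
    if ($pid == 0) {                        # child / "worker"
        print "worker END-style message\n"; # sits in the block buffer
        sleep 60;                           # killed long before normal exit
        exit 0;                             # never reached: no flush, no END blocks
    }
    sleep 1;
    kill 'KILL', $pid;                      # SIGKILL cannot be trapped or cleaned up after
    waitpid $pid, 0;                        # the log never gets the child's line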

What puzzles me is that all of this causes Perl to redo the entire destruction of the object, passing DESTROY a new reference to the same object.

It might be that the shutdown of one process causes an interrupt in another process via their IPC (preforking uses an IPC mechanism to manage the workers). I've seen this happen hundreds of times now, sometimes in the parent, sometimes in a child... but only ever in one process out of the N workers plus the parent.

So there might be a Net::Server bug causing a disorderly shutdown of the processes... but anyway... That should result in DESTROY methods not being called, not in Perl calling the DESTROY method twice in the same process.
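
A rough sketch of how one might catch the double call within a single process (the package name is hypothetical, and since addresses can be recycled by later objects, treat it as a diagnostic only):

    package My::Guard;                      # hypothetical class
    use strict;
    use warnings;
    use Scalar::Util qw(refaddr);

    my %destroyed;                          # refaddr => times DESTROY has run, per process

    sub new { bless {}, shift }

    sub DESTROY {
        my $self = shift;
        my $addr = refaddr($self);
        warn sprintf("DESTROY called again for 0x%x in pid %d\n", $addr, $$)
            if $destroyed{$addr}++;
    }

    1;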


Re^5: Is this absurd, or have I not RTFM?
by salva (Canon) on May 19, 2014 at 15:29 UTC
    passing DESTROY a new reference to the same object

    When perl finds that the ref count of some object has reached 0, it creates a new variable holding a reference to it, which is then used to invoke DESTROY.

    When it has to do it twice, two variables are created.
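
    A tiny sketch of that (the class name is made up): when the last reference goes out of scope, DESTROY is still handed a perfectly usable reference to the object.

        package Demo;
        use strict;
        use warnings;
        use Scalar::Util qw(refaddr);

        sub new { bless { name => $_[1] }, $_[0] }

        sub DESTROY {
            my $self = shift;   # the reference perl just created for this call
            printf "DESTROY sees '%s' at 0x%x\n", $self->{name}, refaddr($self);
        }

        package main;
        {
            my $obj = Demo->new('widget');
        }   # refcount hits 0 here; perl builds a new reference and invokes DESTROY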

      Yes.... I know.

Re^5: Is this absurd, or have I not RTFM?
by ikegami (Patriarch) on May 20, 2014 at 16:33 UTC

    passing DESTROY a new reference to the same object.

    Of course it's a new reference. The object only gets destroyed when nothing refers to the object (or when in global destruction), so a reference must be created to pass to DESTROY.

      Yes yes... I know. There's still a calling-DESTROY-twice bug though.