

in reply to On handling multiple generations and data between them

Once again side-stepping the directly asked question to discuss the broader issue (I always do this :)

How is it that an architecture of this complexity was designed by someone unfamiliar with unix signal semantics?

Understanding SIGCHLD and its kin is covered in any basic unix text, so my initial concern is that the designer/implementor may lack understanding of many other issues as well, which could prevent other (as yet unposted) aspects of the architecture from working as designed.
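
For reference, a minimal sketch of the standard reaping idiom in Perl (the child's behaviour here is purely illustrative): install a SIGCHLD handler that loops on non-blocking waitpid(), since a single delivered signal can stand for several dead children.

    use strict;
    use warnings;
    use POSIX ':sys_wait_h';    # WNOHANG

    # One SIGCHLD may cover several dead children, so reap in a
    # non-blocking loop until waitpid() reports nothing left.
    $SIG{CHLD} = sub {
        while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
            my $status = $? >> 8;
            warn "reaped child $pid (exit status $status)\n";
        }
    };

    defined(my $pid = fork()) or die "fork failed: $!";
    if ($pid == 0) {        # child: do a little work, then exit
        sleep 1;
        exit 42;
    }
    sleep 2;                # parent: the handler reaps during this sleep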

Further to this, it appears that this massive forking of processes and hierarchy of control is somehow related to efficiency. While I'm the first to stand up and shout for joy at the effectiveness of fork() in unixland, and linux copy-on-write land in particular, the Apache group and others can tell you straight up that it is not the be-all and end-all of efficiency.

For a start, there is no mention of connection or process pooling in the architecture as described, something the Apache team soon recognised as a necessity under heavy load.
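
To make the point concrete, here is a minimal sketch of the pre-forking pattern Apache 1.x made famous; the port and pool size are assumptions, and respawning, signal handling, and error recovery are all omitted:

    use strict;
    use warnings;
    use IO::Socket::INET;

    # Fork a fixed pool of workers up front instead of one process per
    # connection; the kernel hands each incoming accept() to some worker.
    my $listener = IO::Socket::INET->new(
        LocalPort => 8080,      # illustrative port
        Listen    => 5,
        Reuse     => 1,
    ) or die "listen: $!";

    my $POOL_SIZE = 5;
    for (1 .. $POOL_SIZE) {
        defined(my $pid = fork()) or die "fork: $!";
        next if $pid;           # parent: keep forking
        while (my $client = $listener->accept) {   # worker loop
            print $client "hello from worker $$\n";
            close $client;
        }
        exit;
    }
    1 while waitpid(-1, 0) > 0;  # parent blocks on the pool

The payoff is that the fork() cost is paid once at startup rather than once per request, which is exactly the trade-off a per-request forking design ignores.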

There is no discussion of statistics-driven optimisation or caching, both of which should be at the core of any serious performance design.
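
Even trivial caching is nearly free in Perl; a sketch using the standard Memoize module, where lookup_record() is a made-up stand-in for some expensive call:

    use strict;
    use warnings;
    use Memoize;

    # Results are cached keyed on the argument list; repeat calls with
    # the same key never touch the expensive code again.
    memoize('lookup_record');

    sub lookup_record {
        my ($key) = @_;
        # ... imagine an expensive database or file lookup here ...
        return "value for $key";
    }

    lookup_record('foo');   # computed
    lookup_record('foo');   # served from the cache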

There is no discussion of why threads were dismissed as an option. Given that threads operate within a single memory space and have their own synchronisation semantics, which may well be portable, they would seem a vastly more effective option for data sharing than processes and pipes.
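
A sketch of what that sharing looks like under Perl's ithreads, assuming a reasonably modern, threads-enabled perl build (the counter and worker count are illustrative):

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    # One counter visible to all threads; lock() provides the
    # synchronisation that processes-and-pipes need a whole protocol for.
    my $counter : shared = 0;

    my @workers = map {
        threads->create(sub {
            for (1 .. 1000) {
                lock($counter);     # released at end of enclosing block
                $counter++;
            }
        });
    } 1 .. 4;

    $_->join for @workers;
    print "counter = $counter\n";   # always 4000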

In short, as usual, Not Enough Information for help beyond the most basic (which any decent unix text would have told you).

I also note with amusement the notion of using a fork()ing architecture under Windows, where process creation is stunningly slow compared to unix.
