BrowserUK,
I just wanted to get back to you and say THANK YOU for all the education you've given me on threads, shared variables, and thread queues. I followed the plan I laid out, said goodbye to all the datagrams and implemented all my IPC through shared variables and thread queues. There were a few speed bumps, but no major damage, and everything came out fine in the end.
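For anyone following along, the shape of what I ended up with is roughly this (a stripped-down sketch, not the actual brewery code; the variable names and the command are made up):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use threads;
    use threads::shared;
    use Thread::Queue;

    # One shared scalar for published state, one queue for commands.
    my $status :shared = 'idle';
    my $cmdQ = Thread::Queue->new();

    # Worker thread: pull commands off the queue, publish state via the shared var.
    my $worker = threads->create( sub {
        while ( defined( my $cmd = $cmdQ->dequeue() ) ) {
            { lock $status; $status = "running: $cmd"; }
            # ... do the actual work here ...
        }
    } );

    $cmdQ->enqueue( 'read-temps' );   # main thread sends work
    $cmdQ->enqueue( undef );          # undef tells the worker to shut down
    $worker->join();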
After that, I revisited my webserving strategy. The Electronic Brewery's operator console is a web page that loads once and then refreshes its content every second via AJAX calls to the daemon (now a Win32 service). I had waffled between three approaches: (a) doing all the serving myself, (b) having Apache do everything, with the AJAXes hitting a CGI, and (c) splitting the duties between me and Apache.
(a) worked fine, but didn't have all the security stuff that Apache has already figured out. (b) worked, but Apache needed to spawn a perl.exe to handle an interface function that did almost nothing - at great expense. So (c) seemed the best compromise. Apache has figured out all the security for file serving, so it's the right tool for the job of loading the web page and all its appurtenances. The AJAXes are POSTs, URL ignored, with one line of content. So writing, securing, and handling error conditions for an AF_INET SOCK_STREAM server was not that bad.
But once you've found a new toy, you want to use it for everything. Remember Marshall T. Rose's observation that "if the only tool you have is a screwdriver, then everything begins to look like a screw"? Implementing a concurrent webserver with threads cost a lot of time per request - 0.09 sec to create a new thread, but only 0.04 sec to service the AJAX request. So I ended up re-writing (doing a lot of that lately) the concurrent server as an iterative server.
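The iterative version of that AF_INET server boils down to something like this (a rough sketch only; the port is made up and handle_request() stands in for the real dispatch into the daemon):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $listener = IO::Socket::INET->new(
        LocalPort => 8080,        # made-up port
        Proto     => 'tcp',
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "listen: $!";

    # Iterative: accept, answer, close, repeat; no thread per connection.
    while ( my $client = $listener->accept() ) {
        # Skip the headers (URL ignored), noting the Content-Length as we go.
        my ( $len, $body ) = ( 0, '' );
        while ( defined( my $line = <$client> ) ) {
            $len = $1 if $line =~ /^Content-Length:\s*(\d+)/i;
            last if $line =~ /^\r?\n$/;           # blank line ends the headers
        }
        read( $client, $body, $len ) if $len;

        my $reply = handle_request( $body );
        print {$client} "HTTP/1.1 200 OK\r\n",
                        "Content-Type: text/plain\r\n",
                        "Content-Length: ", length( $reply ), "\r\n",
                        "Connection: close\r\n\r\n",
                        $reply;
        close $client;
    }

    # Stand-in for the real dispatch into the daemon (made-up name).
    sub handle_request {
        my ( $body ) = @_;
        return "got: $body\n";
    }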
Now I'm over on the Javascript side, trying to clean up my works-fine but horribly inefficient code.
Thanks again, BrowserUK. You're a wealth of knowledge and a great teacher.
Dave
Implementing a concurrent webserver with threads cost a lot of time per request - 0.09 sec to create a new thread, but only 0.04 sec to service the AJAX request. So I ended up re-writing (doing a lot of that lately) the concurrent server as an iterative server.
I assume you were starting a new thread for each client? Perl's threading implementation carries too much overhead in starting a new thread for that to be a viable option. Actually, I'd say that even in C, starting a new (kernel) thread for every brief exchange is an extremely wasteful way of handling connectionless protocols.
Far better is to use a pool of threads. For an example of handling concurrent sockets, see Re^7: multithreaded tcp listener with IO::Socket. It probably needs checking, as it was written back in the 5.8.6 days and quite a lot has changed in the interim. If you are thinking of using it as the basis for anything, let me know and I'll give it the once-over.
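That node has the details, but the shape of the idea is roughly this (just a sketch, not the code from that node; the port, the pool size, and the POSIX::dup hand-off of the client descriptor are illustrative and, like the original, would want checking on Win32):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use threads;
    use Thread::Queue;
    use IO::Socket::INET;
    use POSIX ();

    # A fixed pool of workers, created once at startup, so the ~0.09 sec
    # thread-creation cost is not paid on every AJAX request.
    my $Q    = Thread::Queue->new();
    my @pool = map { threads->create( \&worker ) } 1 .. 4;

    my $listener = IO::Socket::INET->new(
        LocalPort => 8080,      # made-up port
        Proto     => 'tcp',
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "listen: $!";

    while ( my $client = $listener->accept() ) {
        # Queue a duplicate descriptor; the worker owns it, so nothing is
        # closed out from under it when $client goes out of scope here.
        $Q->enqueue( POSIX::dup( fileno $client ) );
    }

    sub worker {
        while ( defined( my $fd = $Q->dequeue() ) ) {
            open my $sock, '+<&=', $fd or next;   # attach a handle to the fd
            # Read the request and write the reply here; a canned response
            # keeps the sketch short.
            print {$sock} "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok";
            close $sock;                          # worker closes its own fd
        }
    }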
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.