First, I will say this post involves using Perl for CGI. I've seen Perl communities that tear people apart before the TLA "CGI" even gets to the 'G'. I really do not come here often enough to know whether this is one of those communities. Yes, I use Perl for things other than dynamic web pages. No, I realise that any language can be used for CGI, not just Perl. Please, I would prefer not to hear that stock lecture... again. I basically post here when I am looking for information about everybody's favourite scripting language.
While working on a Content Management System (and a horribly designed one at that), I found myself designing small scripting languages so that I would not have to security-check as much input as before. I designed three languages... none of them XML-based (yeck! (That is not XML flamebait; it is more to convey the more active style of the scripts, and that the supporting code is a custom thing)). These exist mostly to make creating and editing pages with the CMS easier. Perl is a decent language to learn, but the Moshes probably don't want to learn it. I also found that the interpreters get loaded into memory anew for each CGI call, which seems a bit much to me.
One of my design goals is that, in the theoretical circumstance that I happened to get slashdotted, my site would take longer to get knocked off the web temporarily. Another is security.
In this CMS software, I already have a server pool that queues requests to various SQL servers while limiting sockets (to get rid of the time wasted opening and closing sockets, it keeps them open all the time and queues the appropriate data to each server). It was only natural to think: maybe make the interpreters their own server too.
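To make that concrete, here is a generic sketch of the reuse logic, with the connect and liveness checks injected as callbacks, since the real CMS code presumably wraps DBI handles or raw sockets. Every name here is illustrative, not the actual CMS code.

```perl
#!/usr/bin/perl
# Sketch of the pooling idea: keep one persistent handle per backend
# and reuse it, instead of reconnecting on every CGI hit.
use strict;
use warnings;

my %pool;    # one persistent handle per backend key

sub pooled {
    my ($key, $connect, $alive) = @_;
    # Reuse the open handle if it still responds; otherwise reconnect.
    if (my $h = $pool{$key}) {
        return $h if $alive->($h);
    }
    return $pool{$key} = $connect->();
}
```

A CGI hit then calls `pooled('sql1', ...)` instead of opening a fresh socket, so the connection cost is paid once rather than per request.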
I could do a quick demo in Perl to tide me over, but really, that would just be a bad design decision. Then it occurred to me to write the language server in C++. However, the three languages I've designed are not really that... well, not that advanced. One is a Forth dialect (more than anything else), the second is a badly done Lisp dialect (I am thinking of consulting the HyperSpec for that one), and the last is really just a forum-markup interpreter (you know, when you go onto a forum and type [b] for <b>). This would make the server... well, it could be better.
Now, currently I am working on a compiled language for the GBA and SNES, which only requires me to write libraries matching the various hardware specs for each (yes, they may be slightly less powerful than the X-station-cube 4, but they are still nice pieces of hardware in their own right). The language is a Forth dialect that will most likely end up resembling PostScript more than general Forth, which is kind of the plan.
Why is that important? Well, doing that has taught me how to look at the details of other scripting languages, think about how their internals work, and, with more experience, possibly make a duplicate. This is what has led me to believe that I could make a server that accepts Perl scripts as part of its calls.
It would be done on a threaded basis (think internal *n?x kernel threading, not the POSIX libraries), where each script currently being serviced would get a certain amount of time on each pass, then the next script/symbol table would be given that much time, and so on around the loop, so that if a script did not finish on the first pass it would pick up where it left off. Keep in mind, my reference for the *n?x kernel code was the published version of the often-bootlegged AT&T UNIX Source Code, 6th Edition, which just might be somewhat dated (might?!?). Generally the interpreter would be separated from the symbol tables, scripts, and data used by the passes, and those would be run through the interpreters in what should be a shared and efficient manner.
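That pass loop can be sketched cooperatively in a few lines, with each "script" reduced to a closure that does one slice of work and remembers its position; the real thing would hold a symbol table and an instruction pointer instead, and these names are mine, not anything from an actual implementation.

```perl
#!/usr/bin/perl
# Round-robin sketch: each script gets one slice per pass and saves
# its position, so unfinished scripts resume where they left off.
use strict;
use warnings;

sub run_round_robin {
    my (@scripts) = @_;
    while (@scripts) {
        # One pass: give every live script a turn, keep the unfinished.
        @scripts = grep { $_->() } @scripts;
    }
}

# A stand-in "script": runs for $total slices, then reports done.
sub make_task {
    my ($name, $total, $log) = @_;
    my $pos = 0;                      # saved between passes
    return sub {
        push @$log, $name;            # do one slice of "work"
        return ++$pos < $total;       # true while unfinished
    };
}
```

Running `run_round_robin(make_task('a', 2, \@log), make_task('b', 1, \@log))` interleaves the work as a, b, a: task b finishes on the first pass, and task a resumes from its saved position on the second.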
I would prefer to take a look at the perl source code itself, but currently I am unable to download it (ISP issues); if worst comes to worst I might be able to do a Perl dialect for this idea (ew! I know, I know, it would never match up to the sheer genius of Larry Wall).
As for the client: it will mostly just be located at the path the usual shebang points to, and will send back the appropriate response. I am going to guess that information is passed to the program named in the shebang via STDIN. It would be extremely easy to confirm, but I am on the wrong computer to do that.
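Whatever the shebang mechanism turns out to pass, the client still needs some framing when it forwards the script (and later the CGI input) over the socket to the server. Here is one hypothetical length-prefixed format; this is an assumption of mine, not an existing protocol:

```perl
#!/usr/bin/perl
# Hypothetical wire format for shipping a script to the server:
# a decimal byte count, a newline, then exactly that many bytes.
use strict;
use warnings;

sub frame_script {
    my ($text) = @_;
    return length($text) . "\n" . $text;
}

sub read_framed {
    my ($fh) = @_;
    my $len = <$fh>;             # first line: payload size
    chomp $len;
    my $buf = '';
    read($fh, $buf, $len) == $len or die "short read";
    return $buf;
}
```

The client would write `frame_script($source)` onto the socket and the server would call `read_framed` on the accepted connection; it can be tried locally with an in-memory filehandle.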
This is all nice and such, but then there is security. The only things I can really think of to do are: (1) limit IPs, with a default of 127.0.0.1, and (2) require some manner of username/password authentication. For the username/password, I am hazarding that, since perl normally reads the script anyway, just putting a few lines at the top of the script would do:
    #!/usr/bin/perl
    # 127.0.0.1
    # DakeDesu
    # egassemniddeh
or something similar (really, that part is arbitrary).
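Server-side, pulling those fields back out is only a few lines. This sketch assumes exactly three comment lines after the shebang (allowed IP, username, password), in that order; the function name and the field order are my invention.

```perl
#!/usr/bin/perl
# Parse the proposed header: shebang, then an allowed-IP comment,
# a username comment, and a password comment.
use strict;
use warnings;

sub parse_auth {
    my ($src) = @_;
    my @lines = split /\n/, $src;
    shift @lines if @lines && $lines[0] =~ /^#!/;   # drop the shebang
    my @fields;
    for my $line (@lines[0 .. 2]) {
        last unless defined $line && $line =~ /^#\s*(\S+)/;
        push @fields, $1;
    }
    die "bad auth header\n" unless @fields == 3;
    return @fields;                 # (ip, user, password)
}
```

The server would then compare the first field against the peer address and the other two against its user database before interpreting anything.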
The general savings in memory would come from avoiding the perl interpreter being loaded into memory several times over. So it would mostly be on a web server that this gets decent results.
Yes, there are other technologies that I believe are similar. (Please correct me, but don't hijack the thread about it, please :) )
There is no way my concept is new; I think I've heard of mainframes, back before I was even born, running central BASIC servers. So the major question I am asking is: why hasn't this been thought of or done with perl before now? I would be more than happy to work on it myself, but I can tell you right now it would not yield anything within the next year or two (which really doesn't impress the Moshes).