Well, if I run the script with 2-3 people using it, it runs hassle-free. The issue happens when I make it live (but CPU usage doesn't go up).
I can try, though. But since the code is written in a centralized environment, it is more dynamic. For example, I write in HTML:
<*Get List|type:all*>
so the only things I can really comment out are the objects; the rest is kinda interdependent. For a better understanding, here is my code structure:
I have 4 namespaces:
main - the main namespace, which calls the other namespaces and handles outputting data from common (headers) and frame (HTML).
common - stores all the basic stuff like opening files, accessing the database, etc. It also loads query strings and headers.
frame - loads the HTML and the advanced functions (login, account, etc.), which are built by calling common. After loading the HTML, it loads the Skin namespace onto the loaded HTML.
Skin - loads the layout, all the images, textboxes, and forms; assigns ids; and handles other more advanced objects.
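The kind of tag expansion described above (the <*Get List|type:all*> syntax) can be sketched roughly like this. The tag format is taken from the post, but the dispatch table, the handler, and `expand_tags` are hypothetical names for illustration, not the poster's actual code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical dispatch table: tag names map to handler subs.
# In the real code, these handlers would live in the frame/common namespaces.
my %handlers = (
    'Get List' => sub {
        my (%args) = @_;
        # A real handler would query the database via common;
        # here we just return a placeholder list.
        return "<ul><li>item (type=$args{type})</li></ul>";
    },
);

# Expand every <*Name|key:value*> tag found in a chunk of HTML.
sub expand_tags {
    my ($html) = @_;
    $html =~ s{<\*([^|*]+)\|([^*]+)\*>}{
        my ($name, $argstr) = ($1, $2);
        # Split "key:value|key:value" pairs into a hash of arguments.
        my %args = map { split /:/, $_, 2 } split /\|/, $argstr;
        $handlers{$name} ? $handlers{$name}->(%args) : '';
    }ge;
    return $html;
}

print expand_tags('<p><*Get List|type:all*></p>'), "\n";
```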
Sigh... my urge to have a centralized structure is probably my biggest downfall. It's good and convenient, but if the core has issues, it makes everything much harder to debug :(
Well, if I run the script with 2-3 people using it, it runs hassle-free. The issue happens when I make it live (but CPU usage doesn't go up).
Have you tried loading your site up from 3 clients to whatever your production load is? If your server is only set to handle a certain number of requests at a time, your other requests may be waiting behind those that are currently processing. These waiting requests would not necessarily cause the load of the machine to increase.
Try bringing up the number of clients slowly and see when the problem starts. Then tune your web server / web farm to handle the required number of connections.
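As a back-of-the-envelope illustration of why queued requests don't show up as CPU load: only the in-flight requests consume CPU, while the rest simply wait. All numbers below are assumptions for the sketch, not measurements from the poster's server:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed figures -- tune these to match your own server.
my $workers      = 50;    # e.g. the server's concurrent-request limit
my $service_time = 0.2;   # seconds of work per request (from the thread)
my $arrival_rate = 300;   # requests per second at "live" load (assumed)

# Maximum requests/sec the worker pool can complete.
my $capacity = $workers / $service_time;
printf "capacity: %.0f req/s\n", $capacity;

if ($arrival_rate > $capacity) {
    print "requests queue up: clients see long waits, but CPU is only\n",
          "busy on the $workers requests actually in flight\n";
}
```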
my urge to have a centralized structure is probably my biggest downfall
It seems to me that you know what your problem is, but just don't want to admit it (to yourself) :)
Viz.
I have 4 namespaces:
main - the main namespace, which calls the other namespaces and handles outputting data from common (headers) and frame (HTML).
common - stores all the basic stuff like opening files, accessing the database, etc. It also loads query strings and headers.
frame - loads the HTML and the advanced functions (login, account, etc.), which are built by calling common. After loading the HTML, it loads the Skin namespace onto the loaded HTML.
Skin - loads the layout, all the images, textboxes, and forms; assigns ids; and handles other more advanced objects.
You're replicating everything for every request, and hoping that mod_perl/PerlEx will take care of things. I think you are expecting far too much of them.
When I said earlier that I know nowt about them, I meant that I know nothing about tuning them. I do know what they do, but have never had the desire to use them. They seem altogether too hackish to me.
I've no use for Apache either. For example, you said in your OP that your memory consumption is approaching 1.7GB. In that same space I can install 3500 concurrent copies of TinyWeb, each capable of servicing dozens of concurrent requests.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Well, a centralized system is good... when it works...
For example, Perl itself: the machine code that makes Perl possible is good, but when there is a fault in that machine code, everything becomes more difficult. Due to the way the site is, a central system just seems to make more sense.
Plus, 3500 is kind of cheating, for two reasons. First, out of the 1.7GB, I would say 900MB or so is the actual number; the rest is things like Windows services, etc. Second, even if you did run 3,500 instances, each would also spawn its own perl.exe, which in turn would consume more resources, no?
mod_perl/PerlEx isn't really that bad, because it saves on startup. Perl's DLL itself is around 900KB, so loading it 1,000 times would mean roughly 900,000KB; even if I weren't using its pre-loading abilities, I'd save a lot on startup. The downside, though, is the lack of documentation and the difficulty of debugging. Every code update forces me to reset the entire server and start it up again, which makes debugging in live environments a living hell.
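As an aside on the restart-on-every-update pain: under mod_perl 2, the Apache2::Reload module can recompile modules that have changed on disk without a full server restart. This is a mod_perl-only sketch (I don't know of a PerlEx equivalent), and the module list below is hypothetical, matching the namespaces described earlier:

```apache
# httpd.conf sketch: reload touched modules on each request (development only)
PerlModule Apache2::Reload
PerlInitHandler Apache2::Reload
PerlSetVar ReloadAll Off
# Reload only your own namespaces (hypothetical list):
PerlSetVar ReloadModules "common::* frame::* Skin::*"
```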
For now, I am probably gonna have to just buy more servers and load-balance until I can find a solution to the issue. I was hoping someone here was familiar with the way mod_perl/PerlEx works and had experienced similar issues, since the execution of the code takes only 0.2 seconds, but it takes 20 seconds to start up, which doesn't make sense, other than threads having issues accessing the same namespace. Also, one thing: PerlEx doesn't use main as its primary namespace but instead uses PerlEx::Instance ID blah blah blah, so I am thinking maybe I am forcing it into using main and thus causing slowdowns. But these are all *hunches* and I can't say for sure, so while I try things, I am hoping someone might be familiar with what could be the issue.
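On the namespace hunch: registry-style engines (Apache::Registry under mod_perl, and PerlEx with its PerlEx::Instance... packages, as quoted above) typically compile each script into its own unique package, so code that hard-codes main:: cannot see the script's variables. The snippet below is a simplified imitation of that wrapping, not PerlEx's actual mechanism; the package and variable names are invented:

```perl
#!/usr/bin/perl
use strict;
use warnings;
no warnings 'once';   # we deliberately mention $main::who only once below

# A registry-style engine takes the script's source...
my $script_body = 'our $who = "wrapped"; sub whoami { __PACKAGE__ }';

# ...and compiles it inside a unique per-script package.
my $unique_pkg = 'Emulated::Instance0';
eval "package $unique_pkg; $script_body; 1" or die $@;

# The script's globals now live in the unique package, not in main::.
print "package seen by the script: ", Emulated::Instance0::whoami(), "\n";
print defined $main::who ? "main::who is set\n" : "main::who is NOT set\n";
```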