in reply to Connection pooling for a Net::Server application

This is a classic example of creating new problems because you are not willing to fix an existing bad decision.

Read "mod_perl: Choosing the Right Strategy". Pay close attention to the 4th alternative (a light front-end server that handles static content itself and proxies dynamic requests to a heavy mod_perl back end). That is the standard strategy for serving high-volume websites with mod_perl, and it is the standard solution for very good reason. Setting that up should completely solve your problem.

If you follow your idea and set up a connection pooling server, you'll essentially have inserted another tier in your web architecture that everything proxies through. That's a lot of overhead for no gain over the standard architecture. Worse yet, if your web pages sometimes hold their database connections while sending data back to the client, then you've reinvented your current architecture with more complications and overhead. Conversely, if they grab and release connections in too fine-grained a way, then you've added a lot of overhead on your webservers from constantly checking connections in and out of the pool, again for no gain over the standard architecture.

If you're absolutely not willing to reconsider the existing architectural decisions, then instead of inserting a connection pooling server I would highly recommend inserting a FastCGI server and moving your logic from Apache to FastCGI. Your code won't change much, and you'll wind up with what is essentially the recommended Apache configuration, just with slightly different technology choices. It will work well for the same reasons that the recommended Apache configuration does.
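To make the FastCGI suggestion concrete, here is a minimal sketch of the kind of request loop involved, assuming the FCGI module from CPAN. The output it prints is just a placeholder; the point is the shape of the loop.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use FCGI;

# One long-lived process per worker: everything set up here
# (loaded modules, database handles, parsed config) is created
# once and reused across requests, just as under mod_perl.
my $request = FCGI::Request();

# Each successful Accept() is one incoming web request.
while ($request->Accept() >= 0) {
    print "Content-Type: text/plain\r\n\r\n";
    print "Handled by process $$\n";
}
```

Because the process survives from one request to the next, an expensive setup step (such as a database connection) is paid once per worker rather than once per request.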


Re^2: Connection pooling for a Net::Server application
by alech (Initiate) on Aug 29, 2008 at 21:36 UTC
    Hi tilly, thanks for your detailed comment!
    "This is a classic example of creating new problems because you are not willing to fix an existing bad decision."

    I am willing to fix this, but not right now; I simply don't have the time to restructure the whole project at the moment.

    The 4th alternative looks like a clever solution, but I don't think it solves my problem - the users are typically on a fast LAN, so buffering for modem users is not an issue. As for the architecture, the mod_perl part is not the problem either; the architecture already has one additional tier - nearly no computation and no database lookups take place in the mod_perl interface, they all happen behind the connection to the backend OpenXPKI server over a Unix domain socket.

    As for inserting another tier, you've got a point. I still believe that if that tier were lightweight enough, it might improve performance ...

    I'll have a look at FastCGI, but it will probably "only" solve the same problem as the reverse proxy, which seems like it might be attacking the wrong side of the problem. I guess we already have a rather unusual architecture to start with ...

    Cheers,
    Alex

      Even if your users are on a fast LAN, if you're serving images from mod_perl, you're running a lot more mod_perl processes than you need to. Just by putting a reverse proxy in front to serve the static content, you might cut the number of mod_perl processes - and hence the number of connections to your server - in half or more.
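      The reverse proxy setup described above might look something like this in an Apache front-end config. The port and paths here are hypothetical; the idea is that static content is excluded from proxying and everything else is forwarded to the heavy mod_perl server.

```apache
# Light front-end Apache (no mod_perl) on port 80.
# Serve /images/ locally, proxy everything else to the
# mod_perl back end assumed to listen on 127.0.0.1:8080.
ProxyPass        /images/ !
ProxyPass        /        http://127.0.0.1:8080/
ProxyPassReverse /        http://127.0.0.1:8080/
```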
      FastCGI will also solve the problem of having to create an expensive database connection per web request. It does that by reusing processes from one request to the next.

      That problem will also go away if you move the rest of the database interactions out of mod_perl. Heck, if you're right that virtually no database lookups happen, then you could play the silly trick of replacing your database handle with a proxy object that only connects when it is actually used. That way, any request that doesn't touch the database never connects to it.
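      One possible sketch of that proxy-object trick, using AUTOLOAD to forward method calls to the real handle. The class name, DSN, and credentials are made up for illustration; the connection is established only on the first method call.

```perl
package LazyDBH;
use strict;
use warnings;

# A stand-in for a real $dbh: it remembers how to connect,
# but doesn't actually do so until the first method call.
sub new {
    my ($class, $connect) = @_;   # $connect: coderef returning a real handle
    return bless { connect => $connect, dbh => undef }, $class;
}

our $AUTOLOAD;
sub AUTOLOAD {
    my $self = shift;
    (my $method = $AUTOLOAD) =~ s/.*:://;
    return if $method eq 'DESTROY';          # don't connect just to be destroyed
    $self->{dbh} ||= $self->{connect}->();   # connect lazily, exactly once
    return $self->{dbh}->$method(@_);
}

package main;

# Hypothetical DSN and credentials -- substitute your own.
my $dbh = LazyDBH->new(sub {
    require DBI;                 # loaded only if we ever need to connect
    DBI->connect('dbi:Pg:dbname=openxpki', 'user', 'secret',
                 { RaiseError => 1 });
});
# At this point no database connection exists; a request that
# never calls a method on $dbh never connects at all.
```

Drop-in code that calls `$dbh->prepare(...)` or `$dbh->do(...)` keeps working unchanged, since the proxy forwards any method it receives.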