No.
Perl is an easy language to write a prototype in, but a horrible one for writing a serious webserver. If you try, you'll be forced down one of a few basic approaches, all of which are bad ideas in Perl:
- Simple single-threaded: This is what most of those *::Simple and *::Lite servers do. It can't serve two requests at once; a second client simply waits until the first is finished (see the first sketch after this list).
- Forking: Some of the pure Perl webservers offer this. But each forked child is a brand-new process, so a simple CRUD application has to open a fresh database connection on every request (second sketch below). Under real volume, that connection churn will kill most databases.
- Pre-fork: This is how Apache used to work by default, and it can still be configured that way. (Smart mod_perl shops tend to use this approach combined with a reverse proxy.) The idea is that you start out with a pool of children, and whichever one is free serves the request, so connections get reused across requests (third sketch below). Unfortunately Perl processes use too much memory to run large numbers of them. This is the primary reason why serious mod_perl sites use a reverse proxy configuration: you just can't afford to tie up a ton of memory in a process whose job is to dribble bytes at whatever rate the client's dialup can accept them.
- Threading: My opinion is that Perl threading has all of the disadvantages of threading and none of the advantages: ithreads copy the interpreter's data into every new thread, so you pay process-sized memory costs without getting cheap shared state in return. Others don't agree with me, but even threading's advocates would not try to use it to scale a busy website.
- Asynchronous programming: This looks promising until you notice how much of the infrastructure Perl code relies on is synchronous. Database access, for example, goes through DBI, which blocks; your first long-running query therefore takes the whole site down with it (fourth sketch below). Not good.
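
To make the single-threaded problem concrete, here is a minimal sketch using HTTP::Daemon. The port and response body are made up for illustration. The accept loop handles exactly one client at a time; everyone else queues up behind it:

    use strict;
    use warnings;
    use HTTP::Daemon;
    use HTTP::Response;

    my $d = HTTP::Daemon->new(LocalPort => 8080)
        or die "can't listen: $!";
    while (my $c = $d->accept) {                # one client at a time
        while (my $req = $c->get_request) {     # every other client waits here
            $c->send_response(
                HTTP::Response->new(200, 'OK', undef, "hello\n"));
        }
        $c->close;
    }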
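The forking variant looks roughly like this sketch. The DSN, credentials, and query are placeholders, not anything from a real app. Each child is a fresh process, so it can't inherit a usable database handle from the parent and must call DBI->connect itself; with typical non-keepalive clients that's one connect per request, and multiplied by concurrent traffic it buries the database:

    use strict;
    use warnings;
    use HTTP::Daemon;
    use HTTP::Response;
    use DBI;

    $SIG{CHLD} = 'IGNORE';                      # auto-reap exited children

    my $d = HTTP::Daemon->new(LocalPort => 8080)
        or die "can't listen: $!";
    while (my $c = $d->accept) {
        defined(my $pid = fork) or die "fork failed: $!";
        if ($pid == 0) {                        # child: one process per connection
            # A brand-new database connection each time: this is the
            # part that kills the database under volume.
            my $dbh = DBI->connect('dbi:mysql:app', 'user', 'pass',
                                   { RaiseError => 1 });
            while (my $req = $c->get_request) {
                my ($n) = $dbh->selectrow_array('SELECT COUNT(*) FROM items');
                $c->send_response(
                    HTTP::Response->new(200, 'OK', undef, "$n\n"));
            }
            $dbh->disconnect;
            exit 0;
        }
        $c->close;                              # parent: the child owns the socket
    }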
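For contrast, a bare-bones pre-fork loop looks like this sketch (child count and port are arbitrary). The listening socket is created once and shared, and each child keeps its process, and anything expensive like a database handle, alive across many requests. The catch is the $kids number: every one of them is a full Perl interpreter's worth of RAM:

    use strict;
    use warnings;
    use HTTP::Daemon;
    use HTTP::Response;

    my $d = HTTP::Daemon->new(LocalPort => 8080)
        or die "can't listen: $!";
    my $kids = 10;                              # each one is a full Perl process in RAM
    for (1 .. $kids) {
        defined(my $pid = fork) or die "fork failed: $!";
        next if $pid;                           # parent keeps forking
        while (my $c = $d->accept) {            # children share the listen socket
            while (my $req = $c->get_request) { # the process outlives the request
                $c->send_response(
                    HTTP::Response->new(200, 'OK', undef, "hello\n"));
            }
            $c->close;
        }
        exit 0;
    }
    wait for 1 .. $kids;                        # parent just waits on its pool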
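And here is the asynchronous trap, sketched with AnyEvent (the module choice, DSN, and query are mine, chosen for illustration). The event loop is single-threaded, so the moment one handler makes a synchronous DBI call, every other connection stalls until the database answers:

    use strict;
    use warnings;
    use AnyEvent;
    use AnyEvent::Socket;
    use DBI;

    my $dbh = DBI->connect('dbi:mysql:app', 'user', 'pass',
                           { RaiseError => 1 });

    tcp_server undef, 8080, sub {
        my ($fh) = @_;
        # DBI is synchronous: this call blocks the one and only event loop,
        # so every other connection hangs until the query comes back.
        my ($n) = $dbh->selectrow_array('SELECT COUNT(*) FROM items');
        syswrite $fh,
            "HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n$n\n";
        close $fh;
    };

    AnyEvent->condvar->recv;                    # run the event loop forever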
Those are your basic options. None of them would be a good choice to handle serious volume. Oh, you might be able to make them work, but at what cost in hardware? Why bother when Apache does the same thing with fewer resources?