With regard to deployment, this is what I have done in the past. Everything here is scripted, and it's expected that the script will run on the box being deployed TO.
There is a directory that everything is deployed to. Call it /prod. Everything happens here.
There are two subdirectories - /environment and /application.
/environment is the environment in which /application runs. Within it are subdirectories that have a unique name. I use the timestamp of creation in YYYYMMDDHHMMSS form, but you can use whatever scheme you like.
Within each subdirectory is a complete environment. A copy of Perl, all modules, and everything else you'd depend on /usr/bin/perl to provide. If you're paranoid enough, it should also contain its own copies of the various libraries, like libz, libjpeg, libpng, etc. You build one of these every time the environment changes.
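A minimal sketch of cutting a new environment, using a temp directory as a stand-in for /prod/environment so it's safe to run. The Perl version, Configure flags, and module name are illustrative assumptions, and the build itself is shown in comments since it needs a source tree:

```shell
#!/bin/sh
# Stand-in for /prod/environment; in real use you'd point ROOT there.
ROOT=$(mktemp -d)
STAMP=$(date +%Y%m%d%H%M%S)        # YYYYMMDDHHMMSS, as described above
ENV_DIR="$ROOT/$STAMP"
mkdir -p "$ENV_DIR"

# Build a private perl with its prefix INSIDE the environment, so nothing
# depends on the system perl (version and flags are illustrative):
#   cd perl-5.38.2
#   ./Configure -des -Dprefix="$ENV_DIR"
#   make && make test && make install
# Then install CPAN modules against THAT perl, not /usr/bin/perl:
#   "$ENV_DIR/bin/perl" -MCPAN -e 'install Some::Module'

echo "$ENV_DIR"
```

The key point is `-Dprefix`: every path the built perl knows about lands inside the timestamped directory, which is what makes the environment disposable and rollback-able.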
/application is the actual application deployment. Within it are subdirectories that have a unique name; I'd use the same scheme as for /environment. There is also a current softlink that points to the current deployment.
When you deploy the application, you hardcode the environment that this specific deployment uses. Once it's deployed AND TESTED, you flip the current softlink over to it and restart the application (bounce Apache, etc).
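The cut-over step can be sketched like this, with a temp directory standing in for /prod/application and hypothetical timestamps. Note the flip itself: `ln -sfn` mostly works but isn't guaranteed atomic everywhere, so a common pattern is to create a new link and rename it over the old one:

```shell
#!/bin/sh
ROOT=$(mktemp -d)                          # stand-in for /prod/application
OLD="$ROOT/20240101120000"                 # hypothetical existing deploy
NEW="$ROOT/20240201120000"                 # hypothetical new deploy
mkdir -p "$OLD" "$NEW"
ln -s "$OLD" "$ROOT/current"               # current -> old deployment

# ... deploy into "$NEW", wire it to its hardcoded environment, TEST it ...

# Flip: make a new link, then rename it over the old one in one step.
ln -s "$NEW" "$ROOT/current.new"
mv -T "$ROOT/current.new" "$ROOT/current"  # GNU mv; atomic rename on Linux

readlink "$ROOT/current"
# ... then bounce Apache / restart the application ...
```

Rollback is the same move in reverse: point a fresh link at the last known good directory and rename it over current.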
This system, as complicated as it sounds, was designed with the following in mind:
You never touch the system perl. Ever.
If something goes wrong with anything, you can rollback to last known good. This includes bad CPAN modules.
You know exactly when things were deployed and how.
You know every single dependency.
You can build on one machine and, if your machines are homogeneous, rsync to the others.
You can even deploy this over NFS.
My criteria for good software:
Does it work?
Can someone else come in, make a change, and be reasonably certain no bugs were introduced?