in reply to Optimizing a web application for the best performance

* Are there any references to look at when writing optimized (web) Perl applications?

Take a look at the mod_perl performance tuning document.

* I've been thinking "use" and "no" would be useful inside a subroutine to save memory at that point, but I found out the hard way that "no" did nothing at all for memory. Could using "no" all over my scripts even be a performance hit?

use() will load a module (if it isn't loaded already) and then call PackageName->import(). no() will load a module (if it isn't loaded already) and call PackageName->unimport(). You can probably assume no() and use() cost about the same in CPU and memory; neither one unloads any code.
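To make that concrete, here is a minimal sketch (Toggle is a hypothetical package) showing that import() and unimport() are just method calls on an already-loaded package; "no" flips a switch, it doesn't free anything:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical package demonstrating what "use" and "no" actually call.
package Toggle;
my $state = 'off';
sub import   { $state = 'on'  }   # run by: use Toggle;
sub unimport { $state = 'off' }   # run by: no Toggle;
sub state    { $state }

package main;
Toggle->import;                   # what "use Toggle" does after require
print Toggle::state(), "\n";      # prints "on"
Toggle->unimport;                 # what "no Toggle" does after require
print Toggle::state(), "\n";      # prints "off"
```

Either way, the Toggle code stays in memory; only the package's idea of its own state changed.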

* Is there any speed difference between using if ($blah) { &dothis } and &dothis if $blah?

If there is, it's minuscule. If you need to worry about it, use XS/C instead.
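If you want to see for yourself rather than guess, the core Benchmark module's cmpthese() makes the comparison easy. A minimal sketch (dothis() here is just a stand-in counter):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $blah = 1;
my $x    = 0;
sub dothis { $x++ }

# Run each style a fixed number of times and print a comparison table.
cmpthese(100_000, {
    block   => sub { if ($blah) { dothis() } },
    postfix => sub { dothis() if $blah },
});
```

On any recent perl the two rows come out within noise of each other.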

* Keeping security in mind, lots of parsing has to be done to protect against malicious input. My way of parsing input has always been "passive": I don't produce a "wrong input" error message (except for hard-core input errors), but rather filter out the bad input and pass the rest to the script. Are there any good modules for filtering out bad input?

Don't filter out bad input. Only accept good input or make sure that the input doesn't matter (security wise). Filtering out bad input is usually by far the hardest to do and takes the most time.
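A minimal sketch of the "only accept good input" (whitelist) approach, assuming a hypothetical username field: match the whole value against a strict pattern and reject anything else, instead of trying to strip out bad characters:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Return the value only if it matches the whitelist pattern; otherwise undef.
sub valid_username {
    my ($input) = @_;
    return $input =~ /\A[a-z][a-z0-9_]{2,15}\z/ ? $input : undef;
}

print defined valid_username('alice_01')  ? "ok\n" : "rejected\n";  # ok
print defined valid_username('a; rm -rf') ? "ok\n" : "rejected\n";  # rejected
```

The whitelist is short to write and easy to audit; a blacklist filter has to anticipate every attack.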

* Does it help to have a dual-core or quad-core CPU for Perl scripts?

If you run more than one process at the same time, yes it helps.

* Which is fastest? An RDBMS (SQL) with DBI/DBD, or DBM? BerkeleyDB 3 or 2? ..

They're not the same thing. If you need an RDBMS, an RDBMS is a better option than a simple database like BerkeleyDB. You can probably assume BerkeleyDB and the like are faster than an RDBMS if you can limit yourself to BerkeleyDB's functionality.

* Which is best whenever your script has to access many files automatically (e.g. gathering metadata from XML files)? Should a database be used to cache the files from disk for the next time they're used? Are there methods for this?

Accessing files directly is probably faster than reading them from a DB (assuming the files are reasonably large). Parsing the files might take most of your time. If you can, keep the parsed structure in memory. If not, it might be useful to store the parsed Perl structure in a file using Data::Dumper, Storable, or something similar.
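A sketch of the Storable approach, caching the parsed result next to the source file and reusing it while the source hasn't changed. parse_xml() here is a placeholder for your real, expensive parser:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(store retrieve);

# Placeholder stand-in for a real (expensive) XML parse.
sub parse_xml {
    my ($file) = @_;
    return { file => $file, size => -s $file };
}

sub load_metadata {
    my ($file) = @_;
    my $cache = "$file.cache";
    # Reuse the cache only if it is at least as new as the source (mtime check).
    if (-e $cache && (stat $cache)[9] >= (stat $file)[9]) {
        return retrieve($cache);
    }
    my $data = parse_xml($file);
    store($data, $cache);       # serialize the parsed structure to disk
    return $data;
}
```

The second call for the same unchanged file skips the parse entirely and just deserializes the cached structure.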

* Sometimes you need to get just one value, for example an MD5 hash; with non-persistent CGIs (as I understand it), once you load a module it cannot be unloaded again and will stay in memory. Is there any way to get such a value without the memory overhead?

You can use Symbol::delete_package(), but I'm not sure that will give you back much memory, and you probably shouldn't bother. On most systems, the memory overhead from the loaded code is dwarfed by the memory overhead from the data it works on anyway.
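For completeness, a sketch of that approach: load the module, take the one value you need, then delete the package. Note that even though the symbol table entries go away, perl typically does not return the freed memory to the OS, so measure before relying on this:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Symbol qw(delete_package);

require Digest::MD5;                        # load only when needed
my $sum = Digest::MD5::md5_hex('data');     # grab the one value
delete_package('Digest::MD5');              # wipe the package's symbol table

print "$sum\n";                             # the value survives the cleanup
```

After delete_package(), Digest::MD5's functions are no longer callable, but the scalar you computed is untouched.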

* When loading a module, does it load the -entire- module, or only the code which is needed?

Depends on the module. Most modules probably load everything, but at least some large standard modules (like POSIX and, I believe, CGI) load on demand.
