in reply to Big config file reading, or cross-process caching mechanism

There is also a third possibility to solve your problem.

It is very easy to move the part of your code that reads the YAML file into a BEGIN {} block. Then the parsing overhead is incurred only on the first execution of your mod_perl program within each httpd process.
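A minimal sketch of the idea (the file path and the YAML module choice are my assumptions, not something from your post):

```perl
use strict;
use warnings;
use YAML qw(LoadFile);

my $config;
BEGIN {
    # Runs once, at compile time. Under mod_perl the compiled script
    # is cached, so each httpd child pays this cost only on its first
    # request; later requests reuse $config from memory.
    $config = LoadFile('/etc/myapp/config.yml');   # hypothetical path
}
```

The key point is that code outside the BEGIN block runs on every request, while the block itself runs once per process.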

The only disadvantage is that every process will share the same file, but it looks like that is fine, since you have hardcoded the YAML file name anyway!

I use this trick to read files > 100MB into memory to enable fast searches of the data.

If you make a module that does this, you can also load that module from your Apache startup file, and then all the httpd children can share the same memory for the data.
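For example, a startup file along these lines (the module name and paths are hypothetical), pulled in from httpd.conf with mod_perl's PerlRequire directive:

```perl
# startup.pl -- loaded via "PerlRequire /path/to/startup.pl" in httpd.conf
use strict;
use warnings;
use lib '/path/to/lib';

# Preloading in the parent process means the YAML is read and parsed
# once; the forked children then share those memory pages.
use My::Config;   # hypothetical module whose BEGIN block reads the YAML

1;
```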

If the YAML file needs to be dynamic, just have the mod_perl program check the file's modification time before using the data. If the file is newer than your last read, reread it.
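A hedged sketch of that mtime check (`$yaml_path`, `get_config`, and the YAML module are illustrative names, not yours):

```perl
use strict;
use warnings;
use YAML qw(LoadFile);

my $yaml_path = '/etc/myapp/config.yml';   # hypothetical path
my ($data, $last_mtime);

sub get_config {
    # stat field 9 is the file's last-modification time.
    my $mtime = (stat $yaml_path)[9];
    if (!defined $last_mtime || $mtime > $last_mtime) {
        $data       = LoadFile($yaml_path);   # reread only when changed
        $last_mtime = $mtime;
    }
    return $data;
}
```

The stat call is cheap compared with reparsing a large file, so doing it on every request costs little.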

Update: I noticed that you are actually reading many YAML files in different directories. It's still easy: make a hash of parsed YAML data, using the directory path as the hash key. Check whether the key already exists in the hash, and read the YAML file only when it doesn't.
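Something like this sketch (`%yaml_cache`, `dir_config`, and the per-directory file name are my guesses at your setup):

```perl
use strict;
use warnings;
use YAML qw(LoadFile);

my %yaml_cache;

sub dir_config {
    my ($dir) = @_;
    # Parse each directory's file at most once per process; later
    # calls return the cached data. //= so even a "false" parse
    # result (e.g. an empty file) is cached.
    $yaml_cache{$dir} //= LoadFile("$dir/config.yml");
    return $yaml_cache{$dir};
}
```

You could combine this with the mtime check above so stale entries get rere
ad, but for mostly static files the plain cache is enough.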

It should work perfectly the first time! - toma