in reply to Fast shared data structures

I'm not sure about efficient sharing of memory between processes, but if one assumes it isn't possible, then the solution, to me, is some other process that caches and *builds* templates on request: you hand it the template file and the data to be filled in, and it returns the finished text stream. The first thought that came to mind was backending this with a database (a single server process), in which you replicate what you currently have in the shared hash. Unless the database server has sufficient ability to mimic everything else you need, you'll then want a Perl server script that sits on top of it; your Apache children can pass the raw data to this process as either XML (slow) or via Storable's freeze/thaw (reasonably faster). Part of each request would name the template file, on which the server can do a timestamp check and recache if necessary. Then it delivers the text stream back, and you're all set.
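As a sketch of the Storable hand-off (the request structure and field names here are made up for illustration; freeze/thaw is Storable's real API):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(freeze thaw);

# Data an Apache child would hand to the template server, along
# with the name of the template it wants filled in (hypothetical).
my %request = (
    template => 'account_summary.tmpl',
    fields   => { user => 'mneylon', balance => 42 },
);

# freeze() gives you a byte string you can write down a socket or
# pipe; thaw() on the other end reconstructs the data structure.
my $wire  = freeze(\%request);
my $again = thaw($wire);

print "template: $again->{template}\n";
print "user:     $again->{fields}{user}\n";
```

In practice the frozen string would go over a socket to the server process rather than being thawed in the same script, but the round-trip is the same.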

Of course, once you think about it, you don't need the database server at all: your Perl server can simply take its place.

I think the key point is that if you want to do this effectively, you need a separate process, unassociated with the web server, so that you keep only one memory store while retaining reasonable speed, as well as the ability to hot-swap the cached templates.
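Inside that server process, the timestamp check might look something like this (the cache layout and sub name are my own invention; reading the raw file stands in for whatever compilation step your template system does):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# In-memory cache, private to the server process:
#   path => { mtime => epoch seconds, compiled => template text }
my %template_cache;

# Return the cached template, recaching only when the file on
# disk is newer than what we already hold in memory.
sub fetch_template {
    my ($path) = @_;
    my $mtime = (stat $path)[9];
    defined $mtime or die "can't stat $path: $!";

    my $entry = $template_cache{$path};
    if ( !$entry || $entry->{mtime} < $mtime ) {
        open my $fh, '<', $path or die "can't read $path: $!";
        local $/;    # slurp the whole file
        $template_cache{$path} = $entry = {
            mtime    => $mtime,
            compiled => scalar <$fh>,   # stand-in for real compilation
        };
    }
    return $entry->{compiled};
}
```

Because only this one process holds `%template_cache`, editing a template file on disk hot-swaps it for every Apache child on the next request, with no coordination needed.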

-----------------------------------------------------
Dr. Michael K. Neylon - mneylon-pm@masemware.com || "You've left the lens cap of your mind on again, Pinky" - The Brain
"I can see my house from here!"
It's not what you know, but knowing how to find it if you don't know that's important