PerlMonks
Re: Caching data with mod_perl
by Revelation (Deacon) on Jul 14, 2002 at 02:38 UTC [id://181546]
A couple things:
However, if you can't use Memoize, please answer my questions! :) I ask them because one might propose a different module, or a different tack, depending on how you intend to do this. Offhand, if I didn't use a module, I would create named subroutines, so that I wouldn't have to reload subrefs from a hash all the time. Note: for non-OO code, $self should not be included, and the lines with $self are probably unnecessary. Personally, I would use OO for this task, because I think it would be easier to expand; however, it is *your choice*.

Update to response: this is the exact purpose of Memoize; however, I have given you an alternative way of doing it in the code above. The module I created is much like your own, just packaged as a module. You can learn from others' modules as well as your own. This is one way to do it, although I prefer Memoize.

On the subject of using 'our' in the module: I was under the impression that 'our' goes out of scope. use vars() allows the hash to remain in the httpd process, so that it doesn't have to be reloaded if the script is called by two different people in the same time period. This means that Joe and Larry can use the cached results of the same routine. (I only cached the routine, but you can probably figure out how to cache results.) The best option would be to cache the results and, once the time period has passed, simply re-execute the ready-made subroutine. In addition, the reason I used named subroutines is that retrieving a subref from a hash and calling it is *probably* more expensive than just calling a subroutine directly.

Just think of my post as an alternate approach :)
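A minimal sketch of the hand-rolled approach described above, assuming a package-global cache hash that survives for the life of the httpd child; the package name, sub name, and keys are my own illustration, not the original poster's code:

```perl
package My::Cache;    # hypothetical module name
use strict;
use vars qw(%CACHE);  # %CACHE persists across requests in one httpd child

# Return a cached result if it is still within the time period;
# otherwise run the supplied coderef and cache its result.
sub get_data {
    my ($key, $ttl, $code) = @_;
    my $entry = $CACHE{$key};
    if ($entry && time() - $entry->{stamp} < $ttl) {
        return $entry->{value};    # cache hit: Joe and Larry share this
    }
    my $value = $code->();         # cache miss: run the expensive routine
    $CACHE{$key} = { value => $value, stamp => time() };
    return $value;
}

1;

# Usage: cache a (hypothetical) expensive query for 300 seconds.
# my $rows = My::Cache::get_data('users', 300, sub { expensive_query() });
```

Because %CACHE is a package variable rather than a lexical, each mod_perl child keeps it between requests, which is the persistence the post attributes to use vars().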
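For comparison, the Memoize route the post prefers is nearly a one-liner; the sub here is a made-up stand-in for the expensive routine:

```perl
use strict;
use Memoize;

# Stand-in for an expensive computation (hypothetical example).
sub slow_lookup {
    my ($id) = @_;
    return $id * 2;
}

memoize('slow_lookup');  # results are now cached per argument list

print slow_lookup(21), "\n";  # computed on the first call
print slow_lookup(21), "\n";  # served from Memoize's cache
```

Plain Memoize caches forever; for the time-period expiry discussed above you would need something like the Memoize::Expire tie interface shipped with the Memoize distribution.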
In Section: Seekers of Perl Wisdom