in reply to Big cache
Thanks everyone c: I got some useful pointers here.
Came up with a pretty simple solution: I just added an import sub to the package that implements the cache loading; all it does is register the packages that use it. The load function is then responsible for registering the objects that need to be read from disk or regenerated, which gives us the following structure:
$cache = {
    'package' => {
        'object' => [ \$result, \&generator, @data ],
    },
};
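A minimal sketch of how such a registry could work; the `register` sub, `%cache`, and `@order` names here are my own illustration, not the actual implementation:

```perl
use strict;
use warnings;

# Registry keyed by package, then object name; each entry holds a
# result slot, a generator, and its arguments, mirroring the
# structure above. @order remembers registration order.
my %cache;
my @order;

sub register {
    my ($package, $object, $result_ref, $generator, @data) = @_;
    $cache{$package}{$object} = [ $result_ref, $generator, @data ];
    push @order, [ $package, $object ];
}

# A consumer package would call this from its import sub:
my $answer;
register('My::Consumer', 'answer', \$answer, sub { 42 });

# On a cache miss, run the generator and fill the slot.
my ($slot, $gen, @args) = @{ $cache{'My::Consumer'}{'answer'} };
$$slot = $gen->(@args) unless defined $$slot;
print "$answer\n";
```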
The order in which the packages and their objects are registered is also kept; from this we build a stack of indices. Here is how I handled writing to disk:
for my $block (@blocks) {

    $block = freeze($block);
    $body .= $block;

    push @header, $header[-1] + length $block;

}
Where @blocks holds the objects themselves and @header is a list of offsets into the file, both in the same order as the index stack. One can then write it all into a single file:
unshift @header, int(@header);

my $header = $signature.(
    pack 'L' x @header, @header
);

print {$FH} $header.$body;
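Assuming the offsets fit in 32-bit unsigned longs (the 'L' template), the header can be decoded the same way on read: skip the signature, pull the count, then the offsets. A small roundtrip check with made-up values:

```perl
use strict;
use warnings;

# Made-up signature and three block offsets, plus the end-of-file
# offset, for illustration only.
my $signature = 'CACHE1';
my @header    = (0, 10, 25, 40);

# As above: prepend the entry count, pack everything as 32-bit unsigned.
unshift @header, int(@header);
my $packed = $signature . pack 'L' x @header, @header;

# Reading it back: skip the signature, read the count, then the offsets.
my $count   = unpack 'L', substr $packed, length($signature), 4;
my @offsets = unpack "L$count",
    substr $packed, length($signature) + 4, 4 * $count;

print join(' ', @offsets), "\n";
```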
Reading back is then just open, seek and read; so I can save data in big blocks but keep in memory only the ones I need.
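The read side can be sketched like this, under the same assumptions (Storable for serialization, end-of-block offsets in the header); the file handling and names are illustrative:

```perl
use strict;
use warnings;
use Storable qw(freeze thaw);
use File::Temp qw(tempfile);

# Write two frozen blocks back to back, recording their end offsets.
my @blocks = ({ name => 'first' }, { name => 'second' });

my $body   = '';
my @header = (0);
for my $block (@blocks) {
    my $frozen = freeze($block);
    $body .= $frozen;
    push @header, $header[-1] + length $frozen;
}

my ($FH, $path) = tempfile(UNLINK => 1);
binmode $FH;
print {$FH} $body;
close $FH;

# Seek straight to block 1 (0-based) and thaw only that slice.
my ($start, $end) = @header[1, 2];
open my $in, '<', $path or die $!;
binmode $in;
seek $in, $start, 0;
read $in, my $buf, $end - $start;
close $in;

my $obj = thaw($buf);
print "$obj->{name}\n";
```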
I will have to implement a mechanism for minimizing reads on consecutive entries, but since the entries are sorted by access, successive cache lookups can be done in one go. Squeaky clean ;>
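One way to batch those consecutive lookups, since the entries are sorted by access, is to coalesce contiguous index runs so each run costs one seek and one read. A sketch of my own, not the author's code:

```perl
use strict;
use warnings;

# Given entry indices, coalesce contiguous runs; each [start, end]
# pair can then be served by a single seek+read.
sub coalesce {
    my @idx  = sort { $a <=> $b } @_;
    my @runs = ([ $idx[0], $idx[0] ]);
    for my $i (@idx[1 .. $#idx]) {
        if ($i == $runs[-1][1] + 1) { $runs[-1][1] = $i }
        else                        { push @runs, [ $i, $i ] }
    }
    return @runs;
}

my @runs = coalesce(2, 3, 4, 7, 9, 10);
print join(' ', map { "$_->[0]-$_->[1]" } @runs), "\n";
```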
So yeah, I wasn't sure how I was going to solve the problem, and less than a day later it is essentially fixed. Nice.
free/libre node licensed under gnu gplv3; your quotes will inherit.