re: A (kinda) Circular Cache
by mr.nick (Chaplain) on Mar 24, 2001 at 00:26 UTC
I remember well the previous discussion about this; I'm afraid I simply couldn't work out how to do it efficiently.
I tried maintaining a separate hash for accessing the data in addition to the array that maintained the order, but ran into difficulties keeping them in sync. I tried having the hash reference positions in the array, but that would require updating the hash whenever the array changed order (after a get, del, or ins). I tried other methods, but found myself still walking the array to maintain it (dels & ins). Since I was walking the array anyhoot, I just left it where you see it now.

What I don't understand is your comment, "Arranging to constantly scan arrays quickly turns into O(n*n) which is slow algorithmically." In what way is my O(n) turned into O(n*n)? Any given single operation of the cache only walks the array once (excluding the splice); isn't that the definition of O(n)? (I'm straining to remember my "big O" notation crap from a decade ago.)
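To make that concrete, here is a rough, untested sketch of that kind of hash-plus-array cache; it is not the code from the original node, and the names (TinyCache, ins, get, del, _drop_from_order) are made up for illustration. The hash holds the values, the array holds the keys in most-recently-used-first order, and every get/del walks the array once to find and splice out the key, so each single operation is O(n):

package TinyCache;
use strict;
use warnings;

sub new {
    my ($class, $max) = @_;
    return bless { data => {}, order => [], max => $max || 100 }, $class;
}

sub ins {
    my ($self, $key, $val) = @_;
    $self->del($key);                       # drop any stale copy first
    $self->{data}{$key} = $val;
    unshift @{ $self->{order} }, $key;      # newest key goes to the front
    if (@{ $self->{order} } > $self->{max}) {
        my $old = pop @{ $self->{order} };  # evict the oldest key
        delete $self->{data}{$old};
    }
}

sub get {
    my ($self, $key) = @_;
    return unless exists $self->{data}{$key};
    $self->_drop_from_order($key);          # the O(n) walk of the array
    unshift @{ $self->{order} }, $key;      # move it back to the front
    return $self->{data}{$key};
}

sub del {
    my ($self, $key) = @_;
    return unless exists $self->{data}{$key};
    $self->_drop_from_order($key);
    delete $self->{data}{$key};
}

sub _drop_from_order {
    my ($self, $key) = @_;
    my $order = $self->{order};
    for my $i (0 .. $#$order) {
        if ($order->[$i] eq $key) {
            splice @$order, $i, 1;
            last;
        }
    }
}

1;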
I should note that I decided to use an array to maintain the order instead of something akin to $hash->{hits}++ and sorting/dropping by that. I presumed a sort would be much more costly than a flat array.
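For comparison, a rough sketch of that hit-counter idea (again untested, with made-up names hit_get/hit_ins); the sort at eviction time is the part I presumed would be costlier than keeping a flat array:

use strict;
use warnings;

my %cache;                     # $cache{$key} = { val => ..., hits => ... }
my $max = 100;

sub hit_get {
    my ($key) = @_;
    return unless exists $cache{$key};
    $cache{$key}{hits}++;      # every access bumps the counter
    return $cache{$key}{val};
}

sub hit_ins {
    my ($key, $val) = @_;
    if (!exists $cache{$key} && keys %cache >= $max) {
        # sort the keys by hit count and drop the least-used one
        my ($least) = sort { $cache{$a}{hits} <=> $cache{$b}{hits} } keys %cache;
        delete $cache{$least};
    }
    $cache{$key} = { val => $val, hits => 1 };
}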