Interesting that you decided to cache at the hour level, rather than the day.
I chose the hour and not the day because it ended up producing a "reasonable" number of entries in the cache: since I typically deal with data spread over 30 days, my cache is usually around 720 entries. If you are dealing with times that are spread over only a day, then I would suggest you go to the minute level of resolution, which would mean a cache of around 1440 entries (both numbers are actually low when you factor in the behaviour of Perl's hashes and the amount of space actually used up).
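To make that concrete, here is a minimal sketch of the idea (the timestamp format and the parse_time name are my assumptions for illustration, not code from the thread): cache the timelocal result at hour resolution, so every timestamp within an already-seen hour costs one hash lookup plus a little arithmetic.

    use Time::Local qw(timelocal);

    my %hour_cache;    # "YYYY-MM-DD HH" => epoch seconds at the top of that hour

    sub parse_time {
        my ($ts) = @_;    # e.g. "2004-06-15 13:45:02"
        my ($date_hour, $min, $sec) =
            $ts =~ /^(\d{4}-\d\d-\d\d \d\d):(\d\d):(\d\d)$/
            or return;
        my $base = $hour_cache{$date_hour};
        unless (defined $base) {
            my ($y, $mo, $d, $h) =
                $date_hour =~ /^(\d{4})-(\d\d)-(\d\d) (\d\d)$/;
            # only one timelocal call per distinct hour in the data
            $base = $hour_cache{$date_hour} = timelocal(0, 0, $h, $d, $mo - 1, $y);
        }
        return $base + 60 * $min + $sec;
    }

With 30 days of data that hash tops out at about 720 keys, which is exactly the "reasonable" size mentioned above.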
As for the analysis side of it, I think it's pretty clear. We are both manipulating strings. A hash lookup on a string of the sizes we are dealing with is far less work than dissecting the string into the required pieces in the required order (perhaps supplying additional values), pushing them onto the stack, having timelocal pull them off the stack, build a fragment it can use to check its own cache, and return the value back over the stack again. If you add it up, it's probably 4 or 5 times more operations (depending on how you define the term) to call the subroutine, which in both of our cases will most likely be for a time we have already encountered.
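A rough illustration of the two paths being compared (the subroutine names are mine, invented for this sketch): the cached version does a single hash lookup on the raw string for every repeated timestamp, while the naive version pays the split, the stack traffic, and timelocal's own internal cache check on every single call.

    use Time::Local qw(timelocal);

    my %seen;    # raw timestamp string => epoch seconds

    sub epoch_cached {
        my ($ts) = @_;
        return $seen{$ts} if exists $seen{$ts};    # the common, cheap case
        return $seen{$ts} = epoch_naive($ts);
    }

    sub epoch_naive {
        # dissect the string, marshal six values, and call timelocal
        my ($y, $mo, $d, $h, $min, $s) = split /[- :]/, $_[0];
        return timelocal($s, $min, $h, $d, $mo - 1, $y);
    }

On log-style data, where the same second shows up many times in a row, you could compare the two with the standard Benchmark module and see the difference for yourself.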