In no particular order...
- Wallclock time is the elapsed time from start to end, which is usually what you want if you are trying to profile the difference between reading data from a file vs. a DB, because you care about the actual time spent. But you have to be wary of wallclock seconds, because if your system is doing anything else while you run the test (i.e., if you start surfing the net while waiting for the test to finish), it will skew the results.
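As a minimal sketch of wallclock timing in Python (the `read_from_file` workload is a made-up stand-in, not anything from this thread), `time.perf_counter()` gives you a monotonic wallclock that isn't affected by system clock adjustments:

```python
import time

def read_from_file():
    # hypothetical stand-in for your real file/DB read
    return sum(range(100_000))

start = time.perf_counter()   # monotonic wallclock, good for elapsed-time measurement
read_from_file()
elapsed = time.perf_counter() - start
print(f"read took {elapsed:.6f}s of wallclock time")
```

Note this still measures everything that happens on the machine during the interval, which is exactly the "surfing the net skews the results" caveat above.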
- It's very conceivable that your DB time might be faster than your "cache" time -- if your cache is on disk, and your DB caches things in memory.
- Someone else in this thread (I don't remember who at the moment) mentioned that there are ways of profiling your whole program, not just snippets -- this is a very good idea, for two reasons:
- You might spend a lot of time optimizing what you think is the slow part of your code, only to find out later that something else is where the real time is wasted.
- Running N iterations of the same piece of code may not give you an accurate picture of what happens when your program runs "in the wild". If you are working on a DB-based app, and you have a caching mechanism for dealing with follow-up lookups on the same data, then running db_lookup(42) 10,000 times followed by cache_lookup(42) 10,000 times won't necessarily give you a good picture of reality -- the hardware might start caching the disk page the file is on, or the DB might start caching the rows in memory, making your times much faster than they normally would be if you did 10,000 calls to each on 10,000 different IDs.
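One way to avoid the repeated-ID skew described above is to benchmark over many distinct IDs instead of hammering the same one. A sketch in Python, where `db_lookup` is a hypothetical placeholder for your real lookup:

```python
import random
import time

def db_lookup(record_id):
    # hypothetical stand-in for a real DB query
    return record_id * 2

# 10,000 distinct IDs, shuffled, so no single row/page gets an unfair
# caching advantage from being hit over and over
ids = random.sample(range(1_000_000), 10_000)

start = time.perf_counter()
for record_id in ids:
    db_lookup(record_id)
elapsed = time.perf_counter() - start
print(f"10,000 distinct lookups took {elapsed:.4f}s")
```

This is closer to "in the wild" behavior, though a fully realistic test would also use the same mix of hot and cold IDs your real traffic sees.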
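On the whole-program profiling suggestion: if your program happens to be Python, the standard library's `cProfile` will profile everything and show you where the time actually goes, rather than where you guessed it goes. A small self-contained sketch (the `slow_helper`/`main` functions are invented for illustration):

```python
import cProfile
import io
import pstats

def slow_helper():
    # invented hot spot for the demo
    return sum(i * i for i in range(50_000))

def main():
    for _ in range(10):
        slow_helper()

profiler = cProfile.Profile()
profiler.enable()
main()                 # profile the whole run, not a cherry-picked snippet
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())  # top 5 functions by cumulative time
```

The report often surprises you -- the function you were about to hand-optimize may barely register.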
Bottom line: make sure what you are profiling makes sense, and is what you want/need to be profiling.