What is the best way to compare profiling results before and after a code change?
by ELISHEVA (Prior)
on Apr 11, 2009 at 18:00 UTC (#757042)
ELISHEVA has asked for the wisdom of the Perl Monks concerning the following question:
Recently I was trying to profile a script and noticed that when I ran the script two or more times in sequence, the total time and the breakdown between the different start-up (perl) and actual execution functions seem to change with each run. On a short script the variance can be as much as 30-40% (e.g. ranging between 0.040s-0.090s). Even on longer-running scripts the variance is often in the 10% range.
This variance happens even on a single user machine with nothing but OS related background processes running. Obviously, those can't just go away, so I presume this variance is just a given of the profiling process.
However, this raises a question for me. How do I tell if a code change really improves performance? If I take only one profiling result before and one after, I can't really tell whether the "improvement" is due to the code change or just an artifact of the background processes and resource sharing at the time of each profiling run.
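One common way to sidestep single-run noise when comparing two versions of a routine is Perl's core Benchmark module, which runs each version many times and reports a relative comparison. A minimal sketch (the old/new subs here are placeholder workloads, not anything from the original post; substitute the real code under test):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical "before" and "after" implementations of the routine
# being tuned; replace these with the real code under comparison.
sub old_version { my $s = 0; $s += $_       for 1 .. 1000; return $s }
sub new_version { my $s = 0; $s += $_ + 0   for 1 .. 1000; return $s }

# Run each sub for at least 2 CPU-seconds (the negative count) and
# print a table of rates and relative percentage differences, which
# averages away much of the run-to-run variance.
cmpthese( -2, {
    old => \&old_version,
    new => \&new_version,
} );
```

Because cmpthese times by CPU seconds rather than a fixed iteration count, each variant accumulates enough samples that background-process jitter largely cancels out.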
Of course, I could do several profiling runs before and after and take averages. Is this something others do? Is there software designed to run a profile several times and calculate the statistics? And if we are going the statistics route, how many runs are needed to get a reliable result? Is the average really the best "central tendency" for comparing before and after results (alternatives: median, mode, min, max)?
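On the choice of central tendency: since background activity only ever adds time, the mean is pulled upward by outliers, so the median or the minimum of many runs is often a more robust summary for wall-clock timings. A minimal sketch of collecting and summarizing repeated timings, assuming a placeholder workload (Time::HiRes and List::Util are both core modules):

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);
use List::Util qw(sum min);

# Time one run of the code under test. The loop below is a stand-in
# workload; substitute the real script or subroutine being profiled.
sub time_one_run {
    my $t0 = [gettimeofday];
    my $s = 0;
    $s += sqrt($_) for 1 .. 100_000;
    return tv_interval($t0);    # elapsed wall-clock seconds
}

# Collect N samples, then report min, median, and mean. Min and median
# are less sensitive than the mean to slow outliers caused by
# background processes.
my $n       = 15;
my @samples = sort { $a <=> $b } map { time_one_run() } 1 .. $n;
my $mean    = sum(@samples) / $n;
my $median  = $samples[ int( $n / 2 ) ];

printf "min %.4fs  median %.4fs  mean %.4fs over %d runs\n",
    min(@samples), $median, $mean, $n;
```

Comparing the before/after medians (or minimums) of, say, 10-30 such runs gives a far more trustworthy verdict than a single profile of each version.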
Many thanks in advance, beth