However, if for whatever reason you find yourself with a substantial acceptance testing phase at the end of your lifecycle, you could try velocity tracking through historical comparison. If you have historically worked on similar projects with similar teams (a dangerous assumption), you could predict the effort for an upcoming acceptance testing phase from the effort those phases required on earlier projects.
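If you want to make that comparison concrete, here is a minimal Perl sketch of the idea. All the effort figures are made up for illustration; the point is just averaging the acceptance-to-development ratio from past projects and scaling it by the current project's development effort.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical historical data: for each past project, the effort
    # (in person-days) spent on development vs. acceptance testing.
    my @history = (
        { dev => 120, acceptance => 30 },
        { dev => 200, acceptance => 45 },
        { dev =>  80, acceptance => 22 },
    );

    # Average the acceptance/development ratio across past projects.
    my $total_ratio = 0;
    $total_ratio += $_->{acceptance} / $_->{dev} for @history;
    my $avg_ratio = $total_ratio / @history;

    # Scale by the current project's development effort to get a rough
    # prediction for its acceptance testing phase.
    my $current_dev_effort = 150;    # person-days, made up for illustration
    printf "Predicted acceptance effort: %.1f person-days\n",
        $avg_ratio * $current_dev_effort;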
Another approach is to chart bugs versus time and bugs versus testing effort. While this doesn't help you estimate how long the acceptance testing phase will take, it may help you determine when you are approaching the end: the "bug curve" will "roll off" once most of the critical bugs have been found. This technique is most useful for larger projects, where there are enough developers and bugs to form a statistically meaningful sample.
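For the bug-curve idea, something as simple as tracking the weekly rate of newly reported bugs will do. Here's a rough sketch; the counts, the window size, and the 15% cut-off are all arbitrary choices, not a recommendation. It just flags when the recent discovery rate has fallen well below the peak.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical weekly counts of newly reported bugs during acceptance
    # testing, oldest week first.
    my @new_bugs_per_week = (42, 35, 28, 19, 11, 6, 3);

    # A crude "roll-off" signal: the discovery rate over the last few
    # weeks has dropped to a small fraction of the peak rate.
    my $window    = 3;
    my $threshold = 0.15;    # 15% of the peak weekly rate, arbitrary

    my ($peak) = sort { $b <=> $a } @new_bugs_per_week;
    my @recent = @new_bugs_per_week[ -$window .. -1 ];
    my $recent_avg = 0;
    $recent_avg += $_ for @recent;
    $recent_avg /= $window;

    if ($recent_avg <= $threshold * $peak) {
        print "Bug curve appears to be rolling off; the end may be near.\n";
    } else {
        print "Still finding bugs at a significant rate.\n";
    }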
While the two techniques above may be helpful, I think the best approach is to incorporate more testing into each increment and minimize, if not eliminate, a dedicated acceptance testing phase at the end of your software development lifecycle.