I've been estimating programming and project tasks for a whole bunch of years, and while my estimates have gotten better over time, they're still all over the map in some areas. Lately, I've been pondering why this is, and what I can do to improve. What follows is a progress report of sorts, in the hopes that it generates some thinking and discussion.
To help with the pondering, I use a really simple scheme for collecting data[1]: For each task on the immediate or near horizon, once I'm confident that I understand the task well enough to estimate it, I record an initial estimate, along with my confidence in it (Low, Medium, or High).
Re-estimation is allowed and encouraged. If I learn something more about a task (or have a sudden inspiration or second thoughts) before I start the task, I record a new estimate. Once I start work on a task, at the end of each "session" (typically at the end of the day), I'll record the work completed so far, my sense of the work remaining, and my current confidence level.
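For concreteness, here's a minimal sketch of one way such a log could be kept, assuming a plain CSV file; the file name, field names, and helper function are illustrative choices, not part of the scheme itself.

    # Minimal sketch of the log as a CSV file. The layout is an assumption,
    # not a prescribed format; Excel or a database works just as well.
    import csv
    from datetime import date

    LOG = "estimates.csv"
    FIELDS = ["date", "task", "estimate_hours", "completed_hours",
              "remaining_hours", "confidence"]   # confidence: L, M, or H

    def record(**fields):
        """Append one row: an estimate, a re-estimate, or an end-of-session entry."""
        with open(LOG, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if f.tell() == 0:              # new file: write the header first
                writer.writeheader()
            writer.writerow({"date": date.today().isoformat(), **fields})

    # Initial estimate for a (hypothetical) task:
    record(task="report page", estimate_hours=6, confidence="M")

    # End-of-session entry for the same task:
    record(task="report page", completed_hours=4, remaining_hours=5, confidence="L")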
The bookkeeping overhead is fairly light--on the order of a few minutes a day. The tough part is reconstructing how much time was actually spent on task.
With data in hand (or in Excel, or in a database) at the end of an iteration, it's easy to cull the data and make graphs. Here's where things get interesting: with graphs, it's easy to spot patterns.
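As a rough sketch of that culling step, here's how one task could be pulled out of the CSV above and drawn as a stacked-bar chart of completed vs. remaining effort; it assumes pandas and matplotlib, neither of which the scheme actually requires.

    # Sketch: load the log and chart one task, assuming the CSV layout above.
    import pandas as pd
    import matplotlib.pyplot as plt

    log = pd.read_csv("estimates.csv", parse_dates=["date"])
    task = log[log["task"] == "report page"].sort_values("date")

    # Stacked bars: work completed on the bottom, work remaining on top.
    ax = task.plot(x="date", y=["completed_hours", "remaining_hours"],
                   kind="bar", stacked=True)
    ax.set_ylabel("effort (hours)")

    # Tag each session with its confidence indicator (L/M/H) below the bars.
    for i, conf in enumerate(task["confidence"]):
        ax.annotate(str(conf), (i, 0), xytext=(0, -25),
                    textcoords="offset points", ha="center")
    plt.show()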
A pattern that's surprised me is a tendency on some types of tasks--generally tasks where there are two or more technical unknowns that need to be worked through during the task--for my confidence to drop mid-task, and for a creeping period of apparent non-progress to set in, as the work remaining seems to stay constant no matter how much time I put in. A chart of this effect looks something like this:

    [ASCII chart: one stacked bar of 'c' and 'r' characters per session, with effort on the vertical axis, time on the horizontal axis, and a row of L/M/H confidence indicators along the time axis.]
Here, 'c' represents work completed, 'r' represents work remaining (plotted as a stacked bar on top of work completed), and L, M, and H are confidence indicators.
At a coarse level, the data suggests that I should either be more aggressive about breaking tasks into smaller pieces, or I should bump an estimate up by about 15% for each additional (n>1) technical unknown in the task.
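As a back-of-the-envelope formula, the second option amounts to something like the sketch below (one reading of the rule; the bumps could just as easily compound):

    def adjusted_estimate(base_hours, technical_unknowns):
        """Bump a base estimate by ~15% per technical unknown beyond the first."""
        extra = max(technical_unknowns - 1, 0)
        return base_hours * (1 + 0.15 * extra)

    adjusted_estimate(8, 3)   # 8 hours, 3 unknowns -> 10.4 hours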
The data also confirms something I've known but haven't admitted: I'm really, really bad at estimating how much effort is required to get a UI looking good. I can accurately estimate how long it's going to take to prototype a page in a web app, for example, but the "fit and finish" work almost always takes a lot longer than I think it's going to. But now I know, and can adjust my estimates.
At the risk of going overboard with data collection, I'm thinking about keeping a running count of significant interruptions that happen while I'm on task. I know interruptions are a problem, but I don't have a good handle on how much they cost.
---
[1] I make no claims of originality on my data collection scheme. I have a vague recollection that the Personal Software Process (PSP) does something similar, but at a higher cost.