in reply to Re: On Improving One's Estimates
in thread On Improving One's Estimates

If your organisation’s culture isn’t amenable to this level of scrutiny, then you’ll probably have a hard time with it.

The problem I have with the type of scrutiny you describe (and I've been there before) is that the information flow is entirely too much of a one-way street. Information gets extracted and shovelled into the gaping maw of Microsoft Project, with little feedback other than "hurry up!" to the troops. This serves the interest of the top-level stakeholders, but fails to help me recognize patterns in my estimation behavior.

Without seeing these patterns, how can I improve? It does me little good for a spreadsheet somewhere to say "When Dave says X, add 30%". There's little instructive value in that number. Is that 30% on all estimates? On some? Am I more accurate when estimating some classes of tasks than others? Are there patterns I need to learn to recognize, so that I can avoid the "just another day, I'm almost there" trap?

The last time I looked at O'Connell (a few years back), it seemed that he was good on the planning and execution side, but had big gaps in the individual improvement area. If that's changed, I'll revisit his site.

Replies are listed 'Best First'.
Re: Re: Re: On Improving One's Estimates
by astroboy (Chaplain) on Feb 26, 2004 at 01:10 UTC

    These are all valid points. I think this estimation technique does not apply to all projects, but I would like to offer the following feedback:

    "When Dave says X, add 30%." Using the above methodology, the tasks are broken down to a reasonably fine granularity. You certainly want Dave's input, but basically, you run a report against the database and you see how long Dave took last time he did the same task.

    Detailed time recorded in a database allows you to query estimated vs actual time, by person and by task, so revisions to the estimation process can be made over time. It also helps when answering your question, "Am I more accurate when estimating some classes of tasks than others?", as you have metrics that will tell you. (I think this is why O'Connell feels that estimates based on past experience only work if you have hard data.)
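    To make the idea concrete, here is a minimal sketch of the kind of report described above, using an in-memory SQLite database. The schema, column names, and figures are illustrative assumptions on my part, not anyone's actual tooling:

    ```python
    import sqlite3

    # Illustrative schema: one row per completed task, recording who did it,
    # what class of task it was, and the estimated vs actual hours.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE tasks (
        person TEXT, task_class TEXT,
        estimated_hours REAL, actual_hours REAL)""")
    conn.executemany(
        "INSERT INTO tasks VALUES (?, ?, ?, ?)",
        [("Dave",  "widget", 10, 13),
         ("Dave",  "widget", 20, 26),
         ("Dave",  "report",  4,  4),
         ("Susan", "widget",  8, 10)])

    # Average ratio of actual to estimated time, per person per task class.
    # A ratio of 1.3 is the hard data behind a rule like
    # "when Dave says X, add 30%" -- but broken down by task class,
    # it also shows where Dave is already accurate (reports: 1.0).
    for row in conn.execute("""
            SELECT person, task_class,
                   ROUND(AVG(actual_hours / estimated_hours), 2) AS bias
            FROM tasks
            GROUP BY person, task_class
            ORDER BY person, task_class"""):
        print(row)
    ```

    The point is that the blanket "add 30%" dissolves into per-person, per-task-class numbers, which is exactly the feedback the grandparent post says it never gets.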

    RE: "Information gets extracted and shovelled into the gaping maw of Microsoft Project..." To be honest, I'm not a big fan of MS Project. It does help to get an overview of dependencies, and shows what tasks can be done in parallel with the available resources. But I find it unwieldy and cumbersome. So while we use it, it isn't the guiding tool in the project. (I also think that some project managers -- particularly those who have fallen into the role from non-technical backgrounds -- use it as a crutch in lieu of actual project management. Hey, if you're fiddling with charts then you're justifying your income, right?)

      Using the above methodology, the tasks are broken down to a reasonably fine granularity. You certainly want Dave's input, but basically, you run a report against the database and you see how long Dave took last time he did the same task.

      If you're having developers do the same task twice, without reusing the results from the first round, you've got a problem at a different level.

      But, seriously, the "yesterday's weather" approach to estimating suffers from several problems. For one, the context may be different. Dave may have taken N days to write the last widget because Susan (who knows a lot about widgets) was sitting nearby. Susan has since moved on. Is it now going to take Dave N days to write another widget? Maybe yes, but probably no.

      For another, Dave may have spent a lot of time climbing up the learning curve the first time. What took N days then may take N/2 now.
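      One way to dampen both effects, sketched here as a hedged illustration rather than anything the thread's methodology prescribes, is to weight recent observations more heavily than old ones, so the learning-curve-era timings fade out of the estimate. The decay factor and the sample durations are made-up assumptions:

      ```python
      def weighted_estimate(history, decay=0.5):
          """Estimate the next duration from past ones (oldest first),
          giving each observation `decay` times the weight of the one
          after it, so recent history dominates."""
          n = len(history)
          weights = [decay ** (n - 1 - i) for i in range(n)]
          return sum(w * h for w, h in zip(weights, history)) / sum(weights)

      # Dave's first widget took 10 days (learning curve); later ones took 6 and 5.
      # The weighted estimate lands nearer the recent timings than the
      # plain mean of 7.0 would.
      print(weighted_estimate([10, 6, 5]))
      ```

      None of this fixes the Susan problem, of course: no amount of averaging old numbers tells you that the person who made them possible has moved on.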