# Estimates And Actuals Are Not Bounded Evenly On Both Sides

A while ago I read Steve McConnell’s latest book, Software Estimation: Demystifying the Black Art, which is a fantastic treatise on the “Black Art” of software estimation.

One of the key discoveries the book highlights is just how bad people are at estimation, especially single point estimation.

One of several techniques given in the book focuses on providing three estimation points for every line item.

1. Best Case: If everything goes well, nobody gets sick, the sun shines on your face, how quickly could you get this feature complete?
2. Worst Case: If your dog dies, your significant other leaves you, and your brain turns to mush, what is the absolute longest time it would take to get this done? In other words, there is no way on Earth it would take longer than this time, unless you were shot.
3. Nominal Case: This is your best guess, based on your years of experience with building this type of widget. How long do you really think it will take?

The hope is that when development is complete, you’ll find that the actual time spent is between your best case and worst case. McConnell provides a quiz you can try out to discover that this is harder than it sounds.

Over time, as you reconcile your actual times against your past estimates, you’ll be able to figure out what I call your estimation batting average, a number that represents how accurate your estimates tend to be.
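As a rough sketch of what that reconciliation might look like (the line-item history and the simple hit-rate calculation here are my own illustration, not formulas from the book):

```python
# Hypothetical history of past line items: (best, worst, actual) in days.
history = [
    (2, 6, 5),
    (1, 4, 7),    # actual overshot the worst case
    (3, 10, 8),
    (2, 5, 3),
    (4, 12, 14),  # overshot again
]

# Batting average: how often the actual landed inside the best/worst range.
hits = sum(1 for best, worst, actual in history if best <= actual <= worst)
batting_average = hits / len(history)
print(f"estimation batting average: {batting_average:.0%}")  # → 60%
```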

Once you have these three points for a given estimate, you can apply some formulas and your estimation batting average to create a probability distribution of when you might complete the project. Here is a simple example of what that might look like (though in real life there may be more point values).

• 20% 50 developer days
• 50% 70 developer days
• 80% 90 developer days

So the numbers above show that there’s only a 20% chance the project will be complete within 50 developer days and an 80% chance of completion if the development team is given 90 developer days.
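To make this concrete, here’s a minimal sketch of how such a distribution could be produced. McConnell’s book works with closed-form formulas; this sketch instead runs a simple Monte Carlo simulation over triangular distributions, and the line items are made-up numbers:

```python
import random

# Hypothetical line items: (best, nominal, worst) in developer days.
line_items = [
    (3, 5, 12),
    (2, 4, 10),
    (5, 8, 20),
    (1, 2, 6),
]

def simulate_totals(items, trials=100_000, seed=42):
    """Monte Carlo: sample each item from a triangular distribution
    with its mode at the nominal case, and sum the items per trial."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.triangular(best, worst, nominal)
                          for best, nominal, worst in items))
    return sorted(totals)

totals = simulate_totals(line_items)
for pct in (20, 50, 80):
    days = totals[int(len(totals) * pct / 100)]
    print(f"{pct}% chance of completion within {days:.0f} developer days")
```

Each percentile of the sorted trial totals reads the same way as the list above: the 80% line is the schedule you can commit to with reasonable confidence.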

This technique showcases the uncertainty involved in creating estimates and focuses on the probability that estimates really represent.

After reading this book, I fired up Excel and built a nice spreadsheet with the formulas in the book and columns for these three estimation points. Now I can simply enter my line items, plug in my best, worst, and nominal cases, and out pops a probability distribution of when the project will be complete.

However, as I mentioned before, the crux of this technique relies on that estimation batting average. But when you’re just starting out, you have no idea what that average is, so you have to pull it out of the air (I recommend pulling conservatively).

The reason I bring this all up is that I watched an interesting interview today on the ScobleShow. Robert Scoble interviewed FogCreek founder and well-known technology blogger Joel Spolsky.

Joel let it be known that they are building a new scheduling feature for FogBugz 6 that reflects the reality of software estimation better than typical scheduling software.

One key observation he makes is that actuals tend to overshoot estimates far more often, and by far more, than they undershoot them.

For example, it’s quite common to estimate that a feature will take two days, only to have it take four days, or eight days. But it’s rare that the feature actually ends up taking one day. Obviously it’s impossible for that feature to take 0 days or -4 days.

This makes obvious sense when you think about it.

The amount by which you can finish a feature before an estimated time is constrained, but the amount of time that you can overshoot an estimate is boundless.
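Here’s a minimal sketch of that asymmetry, assuming (purely for illustration — the post doesn’t prescribe a distribution) that actual-to-estimate ratios follow a right-skewed lognormal distribution:

```python
import random

# Model actual/estimate ratios as lognormal to show the asymmetry:
# the actual is bounded below by zero, but overruns are unbounded.
rng = random.Random(7)
estimate_days = 2.0
ratios = [rng.lognormvariate(0.0, 0.6) for _ in range(100_000)]
actuals = [estimate_days * r for r in ratios]

under = sum(1 for a in actuals if a < estimate_days) / len(actuals)
big_overrun = sum(1 for a in actuals if a > 2 * estimate_days) / len(actuals)

print(f"finished under the estimate:   {under:.0%}")
print(f"took more than double:         {big_overrun:.0%}")
print(f"shortest actual: {min(actuals):.2f} days (bounded below by zero)")
print(f"longest actual:  {max(actuals):.2f} days (no upper bound)")
```

Even though roughly half the simulated items finish early, none can save more than the two estimated days, while the long tail of overruns stretches out indefinitely — which is why the over- and under-runs don’t cancel.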

Yet much scheduling software completely ignores this fact, hoping that an underestimate on one item will be offset by an overestimate on another. It assumes these over- and under-estimates balance out, which they clearly do not.

This new feature will attempt to take that into account, along with your track record for estimates (your batting average, if you will), and provide a probability of completion for various dates.

Sounds like a brilliant idea! If done well, that would be quite hot and allow me to chuck my hackish Excel spreadsheet.


### 4 responses

1. March 30th, 2007

Hi Phill,
Great post, hope you write more on this topic.
Thanks.

2. March 30th, 2007

Isn't this the basis for team/developer velocities in Agile development? To turn a developer's estimate into something more like an actual figure.

3. March 30th, 2007

@Dave, in a way it's similar. With XP, you rate every user story with a business value score (usually 1 to 5) and a technical difficulty score (1 to 5 as well). Over time, you'll start to get a picture of how much "Business Value" your team can deliver per week.
That approach gives you a number that's called your Velocity.
This approach is slightly different in that we're not scoring features according to some scale. Instead, we're looking at how accurate our past estimates have been. So rather than saying, "My team delivers 20 pts of business value a week", you can say, "My team's estimates are within their min/max ranges 90% of the time." You might also be able to say how wide that range tends to be (say, one standard deviation).
Either way, the key point is to use past performance to create more quantitative estimates rather than relying on pure subjectivity.

4. April 17th, 2007

The last post about Velocity is incorrect. Velocity is not measured in business value; it is measured in abstract effort units. Maybe you mean that BV *is* an abstract effort unit, but you are mixing terms here for sure.
An abstract unit has nothing in common with BV. The idea is "estimate by comparison". For example, you may say that the login page is 1 point, and then compare all the other features against the "login page". For example, the "As a project manager I want to see all project members" feature may be 3 points, since it is 3 times harder than the "login page". That's the main idea behind abstract effort units. People are great at comparative estimates, not absolute ones.
Abstract unit has nothing common with BV. The idea is in "estimate-by-compare". For example, you may say that login page is 1 point, and then compare all the other features with "login page". For example, "As project manager I want to see all project members" feature may be 3 points since it 3-times harder than "login page". That's the main idea in abstract effort units. People great in comparative estimates, not in absolute.