In many people’s minds, capacity still means “how many man-hours we have available for real work”. This is plain wrong.
Let’s decompose this assumption to see how wrong it is.
- First, implicit in this assumption is the idea that we can estimate exactly how many man-hours we have available for “real work”. The theory goes like this: I have 3 people, the sprint is 2 weeks/10 days, so the effort available, and therefore capacity, is 30 man-days. This is plain wrong! How? Let’s see:
- Not all three people will be doing the same work. So, even if you have a theoretical maximum of 30 man-days available, not everyone can do every kind of work. If, for example, one person is an analyst, another a programmer and the third a tester, then that leaves us with effectively 10 man-days each of analysis, programming and testing effort. Quite different from 30 man-days!
- Then there’s the issue that not 100% of each person’s time can actually be used for work. For example, there are meetings about next sprint’s content, then there are interruptions, time to go to the toilet… You get the picture. In fact it is impossible to predict how many hours each person will actually use for “real” work.
- Then there are those pesky things we call “dependencies”. Sometimes someone in the team is idle because they depend on someone else (in or out of the team) and can’t complete their feature. This leads to unpredictable delays, and therefore ineffective use of the effort available for a Sprint.
- Finally (although other reasons can be found) there’s the implicit assumption that, even if we could know the amount of effort available perfectly, we can also know exactly how long some piece of work takes from beginning to end. This is implicit in how we use the effort numbers: we schedule features against that available effort. The fact is that we (humans) are very bad at estimating something we have not done before, which in software is the case most of the time.
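To see how quickly the naive man-day arithmetic falls apart, here is a minimal sketch. The team composition, sprint length and focus factor are all invented for illustration (the 70% figure is a guess, not data):

```python
# Naive view: 3 people x 10 days = 30 man-days of "capacity".
people = {"analyst": 1, "programmer": 1, "tester": 1}
sprint_days = 10
naive_capacity = sum(people.values()) * sprint_days  # 30 man-days

# But roles are not interchangeable: only 10 of those man-days
# are programming effort (and 10 analysis, 10 testing).
programming_days = people["programmer"] * sprint_days

# And not all of each day is "real work": meetings, interruptions...
# The 0.7 focus factor below is an illustrative assumption.
focus_factor = 0.7
usable_programming_days = programming_days * focus_factor

print(naive_capacity, programming_days, usable_programming_days)
```

Even this toy model ignores dependencies and estimation error, the two other problems listed above, so the real gap between 30 man-days and what gets delivered is wider still.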
The main message here is: effort available (e.g. man-hours) is not the same as capacity. Capacity is the metric that tells us how many features a team or a group of teams can deliver in a Sprint, not the available effort!
Implications of this definition of capacity
There are some important implications of the above statement. If we recognize that capacity is closer to the traditional definition of Throughput then we understand that what we need to estimate is not just size of a task, plus effort available. No, it’s much more complex than that! We need to estimate the impact of dependencies, errors, meetings, etc. on the utilization of the effort available.
Let me illustrate how complex this problem is. If you want to empty a tank of 10 liters attached to a pipe, you will probably want to know how much water can flow through the pipe in 1 minute (or some similar length of time) and then calculate how long it takes to completely empty the tank. Example: if 1 liter flows through the pipe in 1 minute then it will take 10 minutes to empty a 10 liter tank. Easy, no?
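In code, the easy version of the problem is a one-line division, using the numbers from the example above:

```python
tank_liters = 10
flow_liters_per_minute = 1.0

# With the flow rate measured directly, the answer is trivial:
# time = volume / flow rate.
minutes_to_empty = tank_liters / flow_liters_per_minute
print(minutes_to_empty)  # 10.0
```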
Well, what if you now try to guess the time to empty the same tank, but instead of being told that 1 liter of water flows through the pipe each minute, you are instead given:
- Diameter of the pipe
- Material the pipe is made of
- Viscosity of the liquid in the tank
- Probability of obstacles existing in the pipe that could impede the flow of the liquid
- Turbulence equations that allow you to calculate flow when in the presence of an obstacle
Get the point? In software we are in the second situation! We are expected to calculate capacity (which is actually throughput) given a huge list of variables! How silly is that?!
For a better planning and estimating framework
The fact is that the solution for the capacity (and therefore planning) problem in software is much, much easier!
Here’s a blow by blow description:
- Collect a list of features for your product (not all, just the ones you really want to work on)
- With the whole team, assess the features to make sure that none of them is “huge” (i.e. the team is clueless about what it is or how to implement it). If a feature is too large, split it in half (literally). Try to get all features to fit into a sprint (without spending a huge effort on this step).
- Spend about 3 sprints working on that backlog (it pays off to have shorter sprints!)
- After 3 sprints look at your velocity (number of features completed in each sprint) and calculate an average
- Use the average velocity to tell the Product Owner how long it will take to develop the product they want, based on the number of features remaining in the backlog
- Update the average expected velocity after each sprint
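The steps above can be sketched in a few lines of code. The velocity numbers and backlog size are invented for the example:

```python
import math

# Velocities observed over the first 3 sprints
# (features completed per sprint; illustrative numbers).
observed_velocities = [4, 6, 5]

average_velocity = sum(observed_velocities) / len(observed_velocities)

# Forecast for the Product Owner: sprints needed for the remaining backlog.
backlog_size = 42  # features left in the backlog, again just an example
sprints_needed = math.ceil(backlog_size / average_velocity)

# After each sprint, fold the new data point back into the average.
def updated_average(velocities, new_velocity):
    """Recompute the running average velocity with the latest sprint."""
    return sum(velocities + [new_velocity]) / (len(velocities) + 1)

print(average_velocity, sprints_needed)  # 5.0 9
```

Note that no per-feature effort estimates appear anywhere: the only inputs are the count of features completed per sprint and the count of features left.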
Does it sound simple? It is. But most importantly, I have run many experiments based on this simple idea, and I’ve yet to find a project where it would not apply. Small or big, it has worked for every project I’ve been involved in.
The theory behind
There are a couple of principles behind this strategy. First, it’s clear that if you don’t change the team, the technology or any other relevant environmental variable, the team will perform at a similar level (with occasional spikes or dips) over time. Therefore you can use historical velocity information to project a long-term velocity into the future!
Second, you still have to estimate, but you do that at the level of one Sprint. The reason is that even if we have the average velocity, an average does not apply to a single sprint but rather to a set of sprints. Therefore you still need to plan your sprint, to identify possible bottlenecks, coordinate work with other people, etc.
Finally, the benefits. The most important benefit here is that you don’t depend on estimations based on unknown assumptions to make your long term plans. You can rely on data. Sure, sometimes the data will be wrong, but compare that with the alternative: when was the last time you saw a plan hold up? Data is your friend!
Another benefit is that you don’t need to twist anyone’s arm to produce the metrics needed (velocity and number of features in backlog), because those metrics are calculated automatically by the simple act of using Scrum.
All in all not a bad proposition and much simpler than working with unavoidably incorrect estimates that leave everybody screaming to be saved by the bell at the end of the planning meeting!