Accidental complication: another reason why estimates don’t work


Accidental complication
Another reason why estimations can’t work goes by the name of accidental complication. J.B. Rainsberger introduced this concept at Oredev 2013.
The gist of the argument is this: every feature or functionality added to an existing project has two cost elements (what you want to estimate):

  • Essential complication, or g(e): how hard a problem is on its own. For example, implementing tax handling is hard because the tax code is complex in itself. That is what J.B. Rainsberger calls essential complication, or g(e).
  • Accidental complication, or h(a): the complication that creeps into the work because – as J.B. puts it – “we suck at our jobs”. Or, more diplomatically, the complication that comes from our organizational structures (how long does it take to get approval for a new test environment?) and from how programs are written (no one is perfect, so some things must be changed to accommodate the new functionality).

The cost of a feature is a function of both essential and accidental complication:

cost of feature = f(g(e), h(a))

In software, estimates of cost are often done by analogy: if features A and B are similar, and feature A took 3 weeks, then feature B will take 3 weeks. But then J.B. asks: “when was the last time that you worked on a perfect code base, where the cost of the accidental complication (how messy the code base was) was either zero or the same multiple of g(e) as when you implemented feature A?”

He concludes – and I agree – that often the cost of accidental complication (dealing with organizational and technical issues specific to that code base and that organization) dominates (yes, dominates!) the cost of a feature. So, if h(a) is much larger than g(e), the cost of a feature cannot be determined by relative estimation – the most common estimation approach in teams that use, for example, Story Point estimation. What this means for you is: you can only determine the real cost of a feature by recognizing and measuring accidental complication, or – like I suggest in this book – by using the #NoEstimates approach.
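To make the argument concrete, here is a small numeric sketch of why estimation by analogy breaks down when h(a) dominates. The numbers and the simple additive form of f are illustrative assumptions, not measurements from any real project:

```python
# Illustrative sketch: estimation by analogy vs. accidental complication.
# Assumes a simple additive cost model f(g(e), h(a)) = g(e) + h(a).
# All numbers are made up for demonstration.

def feature_cost(essential, accidental):
    """Total cost of a feature: essential plus accidental complication."""
    return essential + accidental

# Feature A: implemented early, on a relatively clean code base.
cost_a = feature_cost(essential=2.0, accidental=1.0)   # 3 "weeks"

# Feature B: same essential difficulty, but the code base has decayed,
# so the accidental complication is now much larger.
cost_b = feature_cost(essential=2.0, accidental=7.0)   # 9 "weeks"

# Estimation by analogy: "B is like A, so B will also take 3 weeks."
estimate_b = cost_a
error = (cost_b - estimate_b) / estimate_b
print(f"analogy estimate: {estimate_b}, actual: {cost_b}, off by {error:.0%}")
# → analogy estimate: 3.0, actual: 9.0, off by 200%
```

The analogy only holds if h(a) stays constant (or stays proportional to g(e)) between the two features, which is exactly the assumption J.B. challenges.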

Learn more about #NoEstimates

Don’t leave your deliveries to chance. Estimates are a percentage game, but we can improve our projects by orders of magnitude, both in value delivery and in “on time” delivery.

If you have not yet seen the video, don’t miss it!

7 thoughts on “Accidental complication: another reason why estimates don’t work”

  1. This argument seems to be less against estimates and more against the approach to estimates which assumes that a programmer or group of programmers can pull accurate estimates out of a hat. As long as the environment isn’t completely chaotic, historic data should encompass most of the variables you characterize as accidental complications. Probabilistic forecasting using historic data could provide useful estimates that take the system and its foibles into account.

    • Thank you for the comment, Paul.

      I do agree that Probabilistic Forecasting would account for some of the inherent variability related to accidental complexity.
      The last point you make is very important: “take the system into account”.

      That’s exactly what I advocate in my approach to NoEstimates: consider the system behavior and assume the system will continue to behave the same way if the stability rules are not broken (more on that at NoEstimatesBook.com).

  2. “Another reason why estimations can’t work….”

    Please, no. Estimates can and do work. I just see so much get in the way of them working in my clients’ typical environment, that I recommend against them in most cases.

    My video says estimates don’t work if you don’t refactor, but clearly if you refactor, then estimates might work. Other things also have to go right, but that can happen.

    If a team stays together long enough, stays with the code base long enough, gets to improve the code base long enough, has stable-enough stakeholders, and has the experience of not having estimates used to beat them over the head long enough, then estimates can cope with the usual volatility that accidental complication introduces. That’s a lot of “if”s, but I wouldn’t call it impossible.

    At most, Vasco, “Another reason why estimations usually don’t work…”. Please.

    • Thanks for the comment, JB. Estimation is consistently and unpredictably off.
      In some companies I’ve worked at, estimates were off by 62% on average, but ranged from nearly 0% to +250% off. I don’t think that qualifies as “working”.

      I advise everyone to reduce their reliance on estimates for software development, and instead use forecasting as a way to evaluate progress against (what is typically) a fixed date for release.

      Another aspect I urge people to consider is that one team’s on-time delivery may have no positive impact on the overall delivery to the customer. In one project I worked on, all the teams delivered “on time” (within an acceptable deviation from the original schedule), yet the whole project was a total failure and was eventually cancelled, because the whole product (or system) was not ready on time.

      In the end, what we should follow is the delivery of increments for the entire system. At that level your arguments from the video become even more important: it is possible for one team to have a “good” code base while the system (all teams together) has large problems.

      In my experience, which I share in more detail at NoEstimatesBook.com, estimation is only right by accident and should not be relied upon to make important/critical decisions.
