
Release more with less

Continuous Delivery enables batch size reduction

Continuous Delivery aims to overcome the large delivery costs traditionally associated with releasing software, and in The Principles of Product Development Flow Don Reinertsen describes delivery cost as a function of transaction cost and holding cost. While transaction costs are incurred by releasing a product increment, holding costs are incurred by not releasing a product increment and are proportional to batch size – the quantity of in-flight value-adding features, which is the unit of work within a value stream.
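To make that relationship concrete, here is a minimal sketch with purely illustrative numbers, showing how a fixed transaction cost per release and a holding cost proportional to batch size combine into a delivery cost per feature:

    # A minimal sketch, with purely illustrative numbers, of delivery cost as
    # a fixed transaction cost per release plus a holding cost proportional
    # to batch size.

    TRANSACTION_COST = 100.0        # cost incurred by releasing a product increment
    HOLDING_COST_PER_FEATURE = 5.0  # cost of each unreleased feature held in a batch

    def delivery_cost_per_feature(batch_size):
        transaction = TRANSACTION_COST / batch_size           # spread across the batch
        holding = HOLDING_COST_PER_FEATURE * batch_size / 2   # average wait within the batch
        return transaction + holding

    for batch_size in (1, 2, 5, 10, 20, 50):
        print(batch_size, round(delivery_cost_per_feature(batch_size), 1))

With these numbers the cost per feature is minimised around a batch size of six; lowering the transaction cost moves that minimum towards smaller batches, which is exactly the leverage Continuous Delivery provides.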

Economic Batch Size [Reinertsen]

The above graph shows that a reduction in transaction cost alone will not dramatically impact delivery cost without a corresponding reduction in batch size, and this mirrors our assertion that automation alone cannot improve cycle time. However, Don also states that “the primary controllable factor that enables small batches is low transaction cost per batch”, and by implementing Continuous Delivery we can minimise transaction costs and consequently release smaller change sets more frequently, obtaining the following benefits:

  1. Improved cycle time – smaller change sets reduce queued work (e.g. pending deployments), and due to Little’s Law cycle time is decreased without constraining demand (e.g. fewer deployments) or increasing capacity (e.g. more deployment staff) – see the sketch after this list
  2. Improved flow – smaller change sets reduce the probability of unpredictable, costly value stream blockages (e.g. multiple deployments awaiting signoff)
  3. Improved feedback – smaller change sets shrink customer feedback loops, enabling product development to be guided by Validated Learning (e.g. measure revenue impact of new user interface)
  4. Improved risk – smaller change sets reduce the quantity of modified code in each release, decreasing both defect probability (i.e. less code to misbehave) and defect cost (i.e. less complex code to debug and fix)
  5. Improved overheads – smaller change sets reduce transaction costs by encouraging optimisations, with more frequent releases necessitating faster tooling (e.g. multi-core processors for Continuous Integration) and streamlined processes (e.g. enterprise-grade test automation)
  6. Improved efficiency – smaller change sets reduce waste by narrowing defect feedback loops, and decreasing the probability of defective code harming value-adding features (e.g. user interface change dependent upon defective API call)
  7. Improved ownership – smaller change sets reduce the diluted sense of responsibility in large releases, increasing emotional investment by limiting change owners (e.g. single developer responsible for change set, feedback in days not weeks)
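The cycle time benefit in point 1 follows directly from Little’s Law. A minimal sketch with illustrative numbers shows cycle time falling as the quantity of in-flight work shrinks while throughput stays the same:

    # A minimal sketch of Little's Law with illustrative numbers:
    # average cycle time = work in progress / throughput.

    def average_cycle_time_days(work_in_progress, throughput_per_day):
        return work_in_progress / throughput_per_day

    THROUGHPUT = 2.0  # value-adding features completed per day (capacity unchanged)

    print(average_cycle_time_days(20, THROUGHPUT))  # large batches queue 20 features: 10.0 days
    print(average_cycle_time_days(6, THROUGHPUT))   # small batches queue 6 features: 3.0 days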

Despite the business-facing value proposition of Continuous Delivery, there may be no incentive from the business team to increase release cadence. However, the benefits of releasing smaller change sets more frequently – improved feedback, risk, overheads, efficiency, and ownership – are also operationally advantageous, and this should be viewed as an opportunity to educate those unaware of the power of batch size reduction. Such a scenario is similar to the growth of Continuous Integration a decade ago, when the operational benefits of frequently integrating smaller code changes overcame the lack of business incentive to increase the rate of source code integration.

Business requirements = minimum release cadence
Operational requirements = maximum release cadence

A persistent problem with increasing release cadence is Eric Ries’ assertion that “the benefits of small batches are counter-intuitive”, and in organisations long accustomed to a high delivery cost it seems only natural to artificially constrain demand or increase capacity to smooth the value stream. For example, suppose our organisation has a 28-day cycle time of which 7 days are earmarked for release testing. In this situation, decreasing cadence to a 36-day cycle time appears less costly than increasing cadence to a 14-day cycle time, as release testing will ostensibly decrease to 19% of our cycle time rather than increase to 50%. However, this ignores both the holding cost of constraining demand and the long-unimplemented optimisations we would be compelled to introduce to achieve a higher release cadence (e.g. increased level of test automation).
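The percentages in that example can be reproduced with a short calculation, using the 7 days of release testing from above:

    # Release testing as a share of cycle time at each cadence, using the
    # 7 days of release testing from the example above.

    RELEASE_TESTING_DAYS = 7

    for cycle_time_days in (14, 28, 36):
        share = RELEASE_TESTING_DAYS / cycle_time_days
        print(f"{cycle_time_days} day cycle time: release testing = {share:.0%}")

The share of cycle time spent on release testing looks better at 36 days, but the percentages say nothing about the holding cost of every queued feature, which only falls with the shorter cycle time.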

Improving cycle time is not just about using Continuous Delivery to reduce transaction costs – we must also be courageous, and release more with less.

Updating a Pipeline

Pipeline updates must minimise risk to protect the Repeatable Reliable Process

We want to quickly deliver new features to users, and in Continuous Delivery Dave Farley and Jez Humble showed that “to achieve these goals – low cycle time and high quality – we need to make frequent, automated releases”. The pipeline constructed to deliver those releases should be no different, and should itself be frequently and automatically released into Production. However, this conflicts with the Continuous Delivery principle of Repeatable Reliable Process – a single application release mechanism for all environments, used thousands of times to minimise errors and build confidence – leading us to ask:

Is the Repeatable Reliable Process principle endangered if a new pipeline version is released?

To answer this question, we can use a risk impact/probability graph to assess if an update will significantly increase the risk of a pipeline operation becoming less repeatable and/or reliable.

Pipeline Risk

This leads to the following assessment (illustrated by a sketch after the list):

  1. An update is unlikely to increase the impact of an operation failing to be repeatable and/or reliable, as the cost of failure is permanently high due to pipeline responsibilities
  2. An update is unlikely to increase the probability of an operation failing to be repeatable, unless the Published Interface at the pipeline entry point is modified. In that situation, the button push becomes more likely to fail, but not more costly
  3. An update is likely to increase the probability of an operation failing to be reliable. This is where stakeholders understandably become more risk averse, searching for a suitable release window and/or pinning a particular pipeline version to a specific artifact version throughout its value stream. These measures may reduce risk for a specific artifact, but do not reduce the probability of failure in the general case
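A small sketch, using hypothetical impact and probability scores, illustrates the assessment: a pipeline update leaves the impact of a failed operation unchanged, but a higher probability of an unreliable operation still raises the overall risk.

    # A sketch with hypothetical scores: an update leaves impact unchanged but
    # raises the probability of an unreliable operation, so overall risk rises.

    def risk(impact, probability):
        return impact * probability

    IMPACT = 8  # the cost of a failed pipeline operation is permanently high

    print(risk(IMPACT, probability=0.02))  # before the pipeline update: 0.16
    print(risk(IMPACT, probability=0.05))  # after the pipeline update: 0.4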

Based on the above, we can now answer our original question as follows:

A pipeline update may endanger the Repeatable Reliable Process principle, and is more likely to impact reliability than repeatability

We can minimise the increased risk of a pipeline update by using the following techniques:

  • Change inspection. If change sets can be shown to be benign with zero impact upon specific artifacts and/or environments, then a new pipeline version is less likely to increase risk aversion
  • Artifact backwards compatibility. If the pipeline uses an Artifact Interface and knows nothing of artifact composition, then a new pipeline version is less likely to break application compatibility
  • Configuration static analysis. If each defect has its root cause captured in a static analysis test, then a new pipeline version is less likely to cause a failure (a sketch follows this list)
  • Increased release cadence. If the frequency of pipeline releases is increased, then a new pipeline version is more likely to contain shallow defects, and benefits from smaller feedback loops and cheaper rollbacks
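As an example of the configuration static analysis technique, here is a minimal sketch, assuming a hypothetical pipeline configuration format, of a test that captures the root cause of a past defect – say, an environment missing its artifact repository URL:

    # A hypothetical pipeline configuration and a static analysis test that
    # captures the root cause of a past defect: an environment missing its
    # artifact repository URL. All names here are illustrative assumptions.

    PIPELINE_CONFIG = {
        "test":       {"artifact_repository": "https://repo.example/test",
                       "deploy_timeout_seconds": 300},
        "staging":    {"artifact_repository": "https://repo.example/staging",
                       "deploy_timeout_seconds": 300},
        "production": {"artifact_repository": "https://repo.example/production",
                       "deploy_timeout_seconds": 600},
    }

    REQUIRED_KEYS = {"artifact_repository", "deploy_timeout_seconds"}

    def test_every_environment_declares_required_keys():
        for environment, settings in PIPELINE_CONFIG.items():
            missing = REQUIRED_KEYS - settings.keys()
            assert not missing, f"{environment} is missing {missing}"

    if __name__ == "__main__":
        test_every_environment_declares_required_keys()
        print("pipeline configuration checks passed")

Each time a defect escapes, a similar test can be added so that the same root cause cannot recur unnoticed in a later pipeline version.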

Finally, it is important to note that a frequently-changing pipeline version may be a symptom of over-centralisation. A pipeline should not possess responsibility without authority and should devolve environment configuration, application configuration, etc. to separate, independently versioned entities.
