On Tech

Author: Steve Smith

Version Control Strategies

A taxonomy of version control strategies for and against Continuous Integration

This series of articles describes a taxonomy for different types of Feature Branching – developers working on branches in isolation from trunk – and how Continuous Integration is impacted by Feature Branching variants.

  1. Organisation antipattern: Release Feature Branching – the what, why, and how of long-lived feature branches
  2. Organisation pattern: Trunk Based Development – the what, why, and how of trunk development
  3. Organisation antipattern: Integration Feature Branching – the what, why, and how of long-lived integration branches
  4. Organisation antipattern: Build Feature Branching – the what, why, and how of short-lived feature branches

Organisation antipattern: Release Feature Branching

Release Feature Branching dramatically increases development costs and risk

Feature Branching is a version control practice in which developers commit their changes to a branch of a source code repository before merging to trunk at a later date. Popularised in the 1990s and 2000s by centralised Version Control Systems (VCS) such as ClearCase, Feature Branching has evolved over the years and is currently enjoying a resurgence in popularity thanks to Distributed Version Control Systems (DVCS) such as Git.

The traditional form of Feature Branching originally promoted by ClearCase et al might be called Release Feature Branching. The central branch known as trunk is considered a flawless representation of all previously released work, and new features for a particular release are developed on a long-lived branch. Developers commit changes to their branch, automated tests are executed, and testers manually verify the new features. Those features are then released into production from the branch, merged into trunk by the developers, and regression tested on trunk by the testers. The branch can then be earmarked for deletion and should only be used for production defect fixes.

Consider an organisation that provides an online company accounts service, with its codebase maintained by a team practicing Release Feature Branching. Two epics – E1 Corporation Tax and E2 Trading Losses – begin development on concurrent feature branches. The E1 branch is broken early on, but E2 is unaffected and carries on regardless.

In month 2, two more epics – E3 Statutory Accounts and E4 Participator Loans – begin. E3 is estimated to have a low impact but its branch is broken by a refactoring and work is rushed to meet the E3 deadline. Meanwhile the E4 branch is broken by a required architecture change and gradually stabilised.

In month 3, E3 is tested and released into production before being merged into trunk and regression tested. The E2 branch breaks, so progress halts until it is fixed. The E1 branch is tested and released into production before the merge and regression testing of trunk + E3 + E1.

In month 4, E2 is tested and released into production, but the subsequent merge and regression testing of trunk + E3 + E1 + E2 unexpectedly fails. While the E2 developers fix trunk, E4 is tested and released, and once trunk is fixed the merge and regression testing of trunk + E3 + E1 + E2 + E4 is performed. Soon afterwards a critical defect is found in E4, so an E4.1 fix is also released.

At this point all four feature branches could theoretically be deleted, but Corporation Tax changes are requested for E1 at short notice and a trunk release is refused by management due to the perceived risk. The dormant E1 branch is resurrected so E1.1 can be released into production and merged into trunk. While the E1 merge was trunk + E3, the E1.1 merge is trunk + E3 + E2 + E4.1, resulting in a more complex merge and extensive regression testing.

In this example E1, E2, E3, and E4 enjoyed between 1 and 3 months of uninterrupted development, and E4 was even released into production while trunk was broken. However, each period of isolated development created a feedback delay on trunk integration, and this was worsened by the localisation of design activities such as the E3 refactoring and E4 architectural change. This ensured merging and regression testing each branch would be a painful, time-consuming process that prevented new features from being worked on – except E1.1, which created an even more costly and risky integration into trunk.

This situation could have been alleviated by the E1, E2, E3, and/or E4 developers directly merging the changes on other branches into their own branch prior to their production release and merge into trunk. For instance, in month 4 the E4 developers might have merged the latest E1 changes, the latest E2 changes, and the final E3 changes into the E4 branch prior to release.

Martin Fowler refers to this process of directly merging between branches as Promiscuous Integration, and promiscuously integrating E1, E2, and E3 into E4 would certainly have reduced the complexity of the eventual trunk + E3 + E1 + E2 + E4 merge. However, newer E1 and E2 changes could still introduce complexity into that merge, and regression testing E4 on trunk would still be necessary.
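
As a rough illustration of Promiscuous Integration, the sketch below shows how the E4 developers might have pulled the other branches into their own before releasing. It assumes Git and hypothetical branch names such as feature/E1 and feature/E4; the original scenario could equally have used any other VCS.

    # A minimal sketch of Promiscuous Integration, assuming Git and
    # hypothetical branch names (feature/E1 ... feature/E4).
    import subprocess

    def git(*args):
        """Run a git command and fail loudly if it does not succeed."""
        subprocess.run(["git", *args], check=True)

    # Work on the E4 feature branch.
    git("checkout", "feature/E4")

    # Merge the latest changes from the other in-flight branches, so the
    # eventual merge of E4 into trunk is smaller and less surprising.
    for branch in ("feature/E1", "feature/E2", "feature/E3"):
        git("fetch", "origin", branch)
        git("merge", "--no-ff", f"origin/{branch}")

Each promiscuous merge has to be repeated as the other branches move on, which is why this reduces rather than removes the cost of the final trunk merge and its regression testing.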

The above example shows how Release Feature Branching inserts an enormously costly and risky integration phase into software delivery. Developer time must be spent managing and merging feature branches into trunk, and with each branch delaying feedback for prolonged periods a complex merge process per branch is inevitable. Tester time must be spent regression testing trunk, and although some merge tools can automatically handle syntactic merge conflicts there remains potential for Semantic Conflicts and subtle errors between features originating from different branches. Promiscuous Integration between branches can reduce merge complexity, but it requires even more developer time devoted to branch management and the need for regression testing on trunk is unchanged.

Since the mid 2000s Release Feature Branching has become increasingly rare due to a greater awareness of its costs. Branching, merging, and regression testing are all non-value adding activities that reduce available time for feature development, and as branches diverge over time there will be a gradual decline in collaboration and codebase quality. This is why it is important to heed the advice of Dave Farley and Jez Humble that “you should never use long-lived, infrequently merged branches as the preferred means of managing the complexity of a large project”.

No Release Testing

This series of articles explains why Release Testing – end-to-end regression testing on the critical path – is a wasteful practice that impedes Continuous Delivery and is unlikely to uncover business critical defects.

  1. Organisation Antipattern: Release Testing – introduces the Release Testing antipattern and why it cannot discover defects
  2. Organisation Antipattern: Consumer Release Testing – introduces the consumer-side variant of the Release Testing antipattern
  3. More Releases With Less Risk – describes how releasing smaller changesets more frequently can reduce probability and cost of failure
  4. Release Testing Is Risk Management Theatre – explains why Release Testing is so ineffective, and offers batch size reduction as an alternative

Application antipattern: Hardcoded Stub

A Hardcoded Stub constrains test determinism and execution times

When testing interactions between interdependent applications we always want to minimise the scope of the System Under Test to ensure deterministic and rapid feedback. This is often accomplished by creating a Stub of the provider application – a lightweight implementation of the provider that supplies canned API responses on demand.

For example, consider an ecommerce website with a microservice architecture. The estate includes a customer-facing Books frontend that relies upon a backend Authentication service for user access controls.

Hardcoded Stub - No Stub

As the Authentication service makes remote calls to a third party, an Authentication Stub is supplied to Books for its automated acceptance testing and manual exploratory testing.

Hardcoded Stub - Stub

A common Stub implementation is a Hardcoded Stub, in which provider behaviour is defined at build time and controlled at run time by magic inputs. For the Authentication Stub that would mean a static pool of pre-authenticated users [1], accessed by magic username via the standard Authentication API [2].

Hardcoded Stub - Hardcoded Stub Single Consumer
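
As a rough sketch of the idea, the hypothetical stub below hardcodes a pool of pre-authenticated users at build time and recognises them by magic username at run time; the class and method names are illustrative rather than taken from a real Authentication service.

    # A minimal sketch of a Hardcoded Stub, with illustrative names.
    # The pool of pre-authenticated users is fixed at build time [1].
    PRE_AUTHENTICATED_USERS = {
        "magic-user-1": {"token": "token-1", "roles": ["customer"]},
        "magic-user-2": {"token": "token-2", "roles": ["customer", "admin"]},
    }

    class HardcodedAuthenticationStub:
        """Serves canned responses via the standard Authentication API [2]."""

        def authenticate(self, username, password):
            # Behaviour is keyed purely on the magic username, so every
            # consumer test implicitly depends on this pre-defined pool.
            user = PRE_AUTHENTICATED_USERS.get(username)
            if user is None:
                raise LookupError(f"Unknown magic user: {username}")
            return {"username": username, "token": user["token"], "roles": user["roles"]}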

While the Authentication Stub has the advantage of not requiring any test setup, the implicit Books dependence upon pre-defined Authentication behaviours will impair Books test determinism and execution times:

  • Changes in the Authentication Stub can cause one or more Books tests to fail unexpectedly, increasing rework
  • Adding, removing, or updating Authentication behaviours requires a new Authentication Stub release, lengthening feedback loops
  • Concurrent test scenarios are constrained by the size of the Authentication Stub user pool, increasing test execution times

An inability to perform concurrent testing will have a significant impact upon lead times – parallel acceptance tests reduce build times, and parallel exploratory tests speed up tester feedback. This problem is exacerbated when multiple consumers rely on the same Hardcoded Stub, such as a Music frontend tested against the same Authentication Stub as the Books frontend. The same pool of pre-authenticated users [1] is offered to both consumers [2 and 3].

Hardcoded Stub - Hardcoded Stub Multiple Consumers

In this situation the simultaneous testing of Books and Music is bottlenecked by the pre-defined capacity of the Authentication Stub, despite their real-world independence. Test data management becomes a key issue, as testers will have to manually coordinate their use of the pre-authenticated users. A Books test could easily impact a Music test or vice versa – for example, a Books tester could accidentally lock out a user about to be used by Music. Such problems can easily lead to wait times within the value stream and inflated lead times.

The root cause of these problems is the overly-contextual nature of a Hardcoded Stub. Rather than predicting test scenarios upfront and providing tightly controlled pathways through provider behaviours, a better approach is to use a Configurable Test Stub – a Configurable Test Double primed by different automated tests and/or exploratory testers to compose provider behaviours. This would mean an Authentication Stub with a private, test-only API able to create users in a desired authentication state and return their generated credentials [1a and 2a] before the standard Authentication API is used [1b and 2b].

Hardcoded Stub - Configurable Stub Multiple Consumers
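
A rough sketch of the configurable alternative is below. The test-only create_user call stands in for the private priming API [1a and 2a], and authenticate for the standard Authentication API [1b and 2b]; both names and the in-memory implementation are assumptions for illustration only.

    # A minimal sketch of a Configurable Test Stub, with illustrative names.
    import uuid

    class ConfigurableAuthenticationStub:
        """Users are created on demand by each test rather than pre-defined."""

        def __init__(self):
            self._users = {}

        def create_user(self, authentication_state="authenticated"):
            # Private, test-only API: prime the stub with a user in the
            # desired state and return the generated credentials.
            username = f"user-{uuid.uuid4().hex[:8]}"
            token = uuid.uuid4().hex
            self._users[username] = {"state": authentication_state, "token": token}
            return {"username": username, "token": token}

        def authenticate(self, username, token):
            # Standard Authentication API, as called by Books and Music.
            user = self._users.get(username)
            if user is None or user["token"] != token or user["state"] != "authenticated":
                raise PermissionError("Authentication failed")
            return {"username": username, "roles": ["customer"]}

Because every Books or Music test primes its own user, there is no shared pool to coordinate and tests can run in parallel.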

By pushing responsibility for Authentication behaviours onto Books and Music, test data management is decentralised and tests become atomic. The Authentication Stub will have a much lower rate of change, Consumer Driven Contracts can be used to safeguard conversation integrity, and both Books and Music can parallelise their test suites to substantially reduce execution times.

A Hardcoded Stub may be an acceptable starting point for testing consumer/provider interactions, but it is unwieldy with a large test suite and unscalable with multiple consumers. A Configurable Test Stub will prevent nondeterministic test results from creeping into consumers and ensure fast feedback.

Organisation antipattern: Passive Disaster Recovery

Passive Disaster Recovery is Risk Management Theatre

When an IT organisation is vulnerable to a negative Black Swan – an extremely low probability, extremely high cost event causing ruinous financial loss – a traditional countermeasure to minimise downtime and opportunity costs is Passive Disaster Recovery. This is where a secondary production environment is established in a separate geographic location to the primary production environment, with every product increment released into both Production and Disaster Recovery, and the Disaster Recovery environment retained in a cold standby state.

For example, consider an organisation hosting version v1040 of a customer-facing service in its Production environment. In the event of a catastrophic failure, customers should be immediately routed to the Disaster Recovery environment and receive the same quality of service.

Organisation Antipattern - Disaster Recovery Environment - Vision

Regardless of physical/virtual hosting and manual/automated infrastructure provisioning, Passive Disaster Recovery is predicated upon the fundamentally flawed assumption that active and passive environments will be identical at any given point in time. Over time the unused Disaster Recovery environment will suffer from hardware, infrastructure, configuration, and software drift until it consists of Snowflake Servers that will likely require significant manual intervention if and when Disaster Recovery is activated. With negative Black Swan opportunity costs incurred at a rapid pace the entire future of the organisation might be placed in jeopardy.

Organisation Antipattern - Disaster Recovery Environment - Failover Drift

Passive Disaster Recovery remains common due to an industry-wide underestimation of negative Black Swan events. It is easier for an individual or an organisation to appreciate the extremely low probability of a disastrous business event than its extremely high opportunity cost, and as a result a Disaster Recovery environment tends to be procured when a business project begins and left to decay into Risk Management Theatre when the capex funding ends.

Continuous Delivery advocates a radically different approach to Disaster Recovery as it is explicitly focussed upon reducing the time, risk, and opportunity cost of delivering high quality services. One of its principles is Bring The Pain Forward – increasing the cadence of high cost, low frequency events to drive down transaction costs – and applying it to Disaster Recovery means moving from passive to active standby via Blue Green Releases and rotating production responsibility between two near-identical environments.

Organisation Antipattern - Disaster Recovery Environment - Blue Green Releases

In the above diagram, the Blue production environment is currently hosting v1040 and the Green environment is being upgraded with v1041. Once v1041 passes its automated smoke tests and manual exploratory tests it is signed off and customers are seamlessly rerouted from Blue to Green. A short period of time afterwards Blue is upgraded in the background and awaits the next production release.

Organisation Antipattern - Disaster Recovery Environment - Green Blue Releases
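
The cutover itself can be a small, scripted step. The sketch below is illustrative only: deploy, smoke_test, and route_traffic_to are hypothetical placeholders for whatever deployment mechanism, smoke test suite, and routing switch (load balancer API, DNS change, and so on) an organisation actually uses.

    # A minimal sketch of a Blue Green cutover, with hypothetical helpers.
    def deploy(environment, version):
        """Placeholder for the real deployment mechanism."""
        print(f"Deploying {version} to {environment}")

    def smoke_test(environment):
        """Placeholder for the automated smoke tests run against an environment."""
        return True

    def route_traffic_to(environment):
        """Placeholder for the real routing switch (load balancer, DNS, etc.)."""
        print(f"Routing customers to {environment}")

    def release(version, active="blue", standby="green"):
        deploy(standby, version)          # upgrade the standby environment
        if not smoke_test(standby):
            raise RuntimeError(f"{version} failed smoke tests on {standby}")
        route_traffic_to(standby)         # seamless cutover, e.g. Blue -> Green
        return standby, active            # roles swap; old active becomes standby

The same routing switch doubles as the rollback mechanism and the Disaster Recovery failover path.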

As well as enabling zero downtime releases and a cheap rollback mechanism, Blue Green Releases provides an effective Disaster Recovery strategy as the standby production environment is always active and in a known good state. If the Green environment suffers a complete outage customers can be switched to the Blue environment with complete confidence, and vice versa.

Organisation Antipattern - Disaster Recovery Environment - Blue Green Failover

By practicing Blue Green Releases an organisation is effectively rehearsing its Disaster Recovery strategy on every production release, and this can lead to advanced practices such as Chaos Engineering, Fault Injection, and Game Days. It requires a continuous investment in hardware and infrastructure, but it will reduce exposure to negative Black Swans and may even offer a strategic advantage over competitors.

Pipeline antipattern: Artifact Promotion

Promoting artifacts between repositories is a poor man’s metadata

Note: this antipattern used to be known as Mutable Binary Location

A Continuous Delivery pipeline is an automated representation of the value stream of an organisation, and rules are often codified in a pipeline to reflect the real-world journey of a product increment. This means artifact status as well as artifact content must be tracked as an artifact progresses towards production.

One way of implementing this requirement is to establish multiple artifact repositories, and promote artifacts through those repositories as they successfully pass different pipeline stages. As an artifact enters a new repository it becomes accessible to later stages of the pipeline and inaccessible to earlier stages.

For example, consider an organisation with a single QA environment and multiple repositories used to house in-progress artifacts. When an artifact is committed and undergoes automated testing it resides within the development repository.

Pipeline Antipattern Artifact Promotion - Development

When that artifact passes automated testing it is signed off for QA, which will trigger a move of that artifact from the development repository to the QA repository. It now becomes available for release into the QA environment.

Pipeline Antipattern Artifact Promotion - QA

When that artifact is pulled into the QA environment and successfully passes exploratory testing it is signed off for production by a tester. The artifact will be moved from the QA repository to the production repository, enabling a production release at a later date.

Pipeline Antipattern Artifact Promotion - Production

A variant of this strategy is for multiple artifact repositories to be managed by a single repository manager, such as Artifactory or Nexus.

Pipeline Antipattern Artifact Promotion - Repository Manager

This strategy fulfils the basic need of restricting which artifacts can be pulled into pre-production and production environments, but its reliance upon repository tooling to represent artifact status introduces a number of problems:

  • Reduced feedback – an unknown artifact can only be reported as not found, yet it could be an invalid version, an artifact in an earlier stage, or a failed artifact
  • Orchestrator complexity – the pipeline runner has to manage multiple repositories, knowing which repository to use for which environment
  • Inflexible architecture – if an environment is added to or removed from the value stream the toolchain will have to change
  • Lack of metrics – pipeline activity data is limited to vendor-specific repository data, making it difficult to track wait times and cycle times

A more flexible approach better aligned with Continuous Delivery is to establish artifact status as a first-class concept in the pipeline and introduce per-binary metadata support.

Pipeline Antipattern Artifact Promotion - Metadata

When a single repository is used, all artifacts reside in the same location alongside their versioned metadata, which provides a definitive record of artifact activity throughout the pipeline. This means unknown artifacts can easily be identified, the complexity of the pipeline orchestrator can be reduced, and any value stream design can be supported over time with no changes to the repository itself.
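
A rough sketch of per-binary metadata is below, assuming a simple JSON file stored alongside each artifact; the field names and the record_result and current_status helpers are illustrative rather than a feature of any particular repository manager.

    # A minimal sketch of per-artifact metadata, with illustrative field names.
    import json
    import pathlib
    from datetime import datetime, timezone

    def record_result(metadata_file, version, stage, passed):
        """Append a stage result to the artifact's versioned metadata."""
        path = pathlib.Path(metadata_file)
        history = json.loads(path.read_text()) if path.exists() else []
        history.append({
            "version": version,
            "stage": stage,                      # e.g. "commit", "qa", "production"
            "status": "passed" if passed else "failed",
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        path.write_text(json.dumps(history, indent=2))

    def current_status(metadata_file, version):
        """Report the latest known state of a version, rather than 'not found'."""
        history = json.loads(pathlib.Path(metadata_file).read_text())
        results = [entry for entry in history if entry["version"] == version]
        return results[-1] if results else {"version": version, "status": "unknown"}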

Furthermore, as the collection of artifact metadata stored in the repository indicates which artifact passed/failed which environment at any given point in time, it becomes trivial to build pipeline dashboards that can display pending releases, application cycle times, and where delays are occurring in the value stream. This is a crucial enabler of organisational change for Continuous Delivery, as it indicates where bottlenecks are occurring in the value stream – likely between people working in separate teams in separate silos.

Organisation antipattern: Dual Value Streams

Dual Value Streams conceal transaction and opportunity costs

The goal of Continuous Delivery is to optimise cycle time in order to increase product revenues, and cycle time is measured as the average lead time of the value stream from code checkin to production release. This was memorably summarised by Mary and Tom Poppendieck as the Poppendieck Question:

“How long would it take your organization to deploy a change that involves just one single line of code? Do you do this on a repeatable, reliable basis?”

The Poppendieck Question is an excellent lead-in to the Continuous Delivery value proposition, but the problem with using it to assess the cycle time of an organisation yet to adopt Continuous Delivery is that there will often be two very different answers – one for features, and one for fixes. For example, consider an organisation with a quarterly release cycle. The initial answer to the Poppendieck Question would be “90 days” or similar.

Dual Value Streams - Feature Value Stream

However, when the transaction cost of releasing software is disproportionately high, a truncated value stream will often emerge for production defect fixes, in which value stream activities are deliberately omitted to slash cycle time. This results in Dual Value Streams – a Feature Value Stream with a cycle time of months, and a Fix Value Stream with a cycle time of days. If our example organisation can release a defect fix in a few days, the correct answer to the Poppendieck Question becomes “90 days or 3 days”.

Dual Value Streams - Fix Value Stream

Fix Value Streams exist because production defect fixes have a clear financial value that is easily communicated and outweighs the high transaction cost of the Feature Value Stream. An organisation will be imbued with a sense of urgency, as a sunk cost has demonstrably been incurred and by releasing a fix faster an opportunity cost can be reduced. People in siloed teams will collaborate upon a fix, and by using a minimal changeset it becomes possible to reason about which value stream activities can be discarded, e.g. omitting capacity testing for a UI fix.

Dual Value Streams is an organisational antipattern because it is a local optimisation with little overall benefit to the organisation. There has been an investment in a release mechanism with a smaller batch size and a lower transaction cost, but as it is reserved for defect fixes it cannot add new customer value to the product. The long-term alternative is for organisations to adopt Continuous Delivery and invest in a single value stream with a minimal overall transaction cost. If our example organisation folded its siloed teams into cross-functional teams and moved activities off the critical path, a fortnightly release cycle would become a distinct possibility.

Dual Value Streams - Value Stream

Dual Value Streams is an indicator of organisational potential for Continuous Delivery. When people are aware of the opportunity costs associated with releasing software as well as the transaction costs, they are more inclined to work together in a cross-functional manner. When changesets contain a small number of changes it becomes easier to collectively reason about which value stream activities are useful and which should be moved off the critical path or retired.

Furthermore, a Fix Value Stream implicitly validates the use of smaller batch sizes as a risk reduction strategy. Defect fixes are released in small changes to minimise both opportunity costs and the probability of any further errors. Given that strategy works for fixes, why not release features more frequently and measure an organisation against a value-centric Poppendieck Question?

“How long would it take your organization to release a single value-adding line of code? Do you do this on a repeatable, reliable basis?”

Pipeline pattern: Analysis Stage

Separate out analysis to preserve commit stage processing time

The entry point of a Continuous Delivery pipeline is its Commit Stage, which manages the compilation, unit testing, analysis, and packaging of source code whenever a change is committed to version control. As the commit stage is responsible for identifying defective code it represents a vital feedback loop for developers, and for that reason Dave Farley and Jez Humble recommend a commit stage that is “ideally less than five minutes and no more than ten” – if the build process is too slow or non-deterministic, the pace of development can soon grind to a halt.

Both compilation and unit testing tasks can be optimised for performance, particularly when the commit stage is hosted on a multi-processor Continuous Integration server. Modern compilers require only a few seconds for compilation, and a unit test suite that follows the Michael Feathers strategy of no database/filesystem/network/user interface access should run in parallel in seconds. However, it is more difficult to optimise analysis tasks as they tend to involve third-party tooling reliant upon byte code manipulation.

When a significant percentage of commit stage time is consumed by static analysis tooling, it may become necessary to trade off unit test feedback against static analysis feedback and move the static analysis tooling into a separate Analysis Stage. The analysis stage is triggered by a successful run of the commit stage, and analyses the uploaded artifact(s) and source code in parallel with the acceptance testing stage. If a failure is detected the relevant pipeline metadata is updated and Stop The Line applies. That artifact cannot be used elsewhere in the pipeline and further development efforts should cease until the issue is resolved.
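
A rough sketch of that orchestration is below; run_analysis_stage, run_acceptance_stage, and mark_failed are hypothetical placeholders for the real pipeline stages and the metadata update.

    # A minimal sketch of running analysis in parallel with acceptance testing.
    from concurrent.futures import ThreadPoolExecutor

    def run_analysis_stage(artifact):
        """Placeholder for static analysis of the artifact and source code."""
        return True

    def run_acceptance_stage(artifact):
        """Placeholder for the automated acceptance test stage."""
        return True

    def mark_failed(artifact, stage):
        """Placeholder for updating pipeline metadata and stopping the line."""
        print(f"{artifact} failed {stage}: stop the line")

    def on_commit_stage_success(artifact):
        # Both stages are triggered by a successful commit stage and run
        # concurrently, so analysis no longer delays developer feedback.
        with ThreadPoolExecutor() as executor:
            analysis = executor.submit(run_analysis_stage, artifact)
            acceptance = executor.submit(run_acceptance_stage, artifact)
        if not analysis.result():
            mark_failed(artifact, "analysis")     # artifact unusable downstream
        if not acceptance.result():
            mark_failed(artifact, "acceptance")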

For example, consider an organisation that has implemented a standard Continuous Delivery pipeline. The commit stage has an average processing time of 5 minutes, of which 1 minute is spent upon static analysis.

Over time the codebase grows to the extent that commit stage time increases to 6 minutes, of which 1 minute 30 seconds is spent upon static analysis. With static analysis time growing from 20% to 25% the decision is made to create a separate Analysis stage, which reduces commit time to 4 minutes 30 seconds and improves the developer feedback loop.

Static analysis is the definitive example of an automated task that periodically needs human intervention. Regardless of tool choice there will always be a percentage of false positives and false negatives, and therefore a pipeline that implements an Analysis Stage must also offer a capability for an authenticated human user to override prior results for one or more application versions.
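
That override can be as simple as another entry in the artifact's metadata. The sketch below is an assumption, with record_override and its fields chosen purely for illustration; how the user is authenticated is left to the pipeline's access controls.

    # A minimal sketch of an authenticated analysis override, with illustrative fields.
    import json
    import pathlib
    from datetime import datetime, timezone

    def record_override(metadata_file, version, user, reason):
        """Record that an authenticated user has overridden a failed analysis result."""
        path = pathlib.Path(metadata_file)
        history = json.loads(path.read_text()) if path.exists() else []
        history.append({
            "version": version,
            "stage": "analysis",
            "status": "overridden",
            "user": user,                        # who approved the override
            "reason": reason,                    # e.g. "known false positive"
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        path.write_text(json.dumps(history, indent=2))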

Organisation pattern: Trunk Based Development Branching

Trunk Based Development supports Optimistic and Pessimistic Release Branching

Trunk Based Development is a style of software development in which all developers commit their changes to a single shared trunk in source control, and every commit yields a production-ready build. It is a prerequisite for Continuous Delivery as it ensures that all code is continuously integrated into a single workstream, that developers always work against the latest code, and that merge/integration pain is minimised. Trunk Based Development is compatible with a Release Branching strategy of short-lived release branches that are used for post-development defect fixes. That strategy might be optimistic and defer branch creation until a defect occurs, or be pessimistic and always create a release branch.

For example, consider an application developed using Trunk Based Development. The most recent commits to trunk were source revisions a and b which yielded application versions 610 and 611 respectively, and version 610 is intended to be the next production release.

Trunk Based Development Branching - Optimistic Release Branching

With Optimistic Release Branching, the release of version 610 is immediate as there is no upfront branching. If a defect is subsequently found then a decision must be made where to commit the fix, as trunk has progressed since 610 from a to b. If the risk of pulling forward from a to b is acceptable then the simple solution is to commit the fix to trunk as c, and consequently release version 612.

Trunk Based Development Branching - Optimistic Release Branching Low Risk Defect

However, if the risk of pulling forward from a to b is unacceptable then a 610.x release branch is created from a, with the fix committed to the branch as c and released as version 610.1. That fix is then merged back into trunk as d to produce the next release candidate 612, and the 610.x branch is earmarked for termination.

Trunk Based Development Branching - Optimistic Release Branching High Risk Defect
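
The high risk sequence might look like the sketch below, assuming Git, a trunk named main, a 610 tag on revision a, and a defect fix already made in the working copy; all of those names are illustrative.

    # A minimal sketch of the high risk case, with illustrative names.
    import subprocess

    def git(*args):
        """Run a git command and fail loudly if it does not succeed."""
        subprocess.run(["git", *args], check=True)

    # Create the short-lived 610.x release branch from released revision a.
    git("checkout", "-b", "release/610.x", "610")

    # Commit the defect fix to the branch as revision c and release it as 610.1.
    git("commit", "-am", "Fix production defect found in 610")
    git("tag", "610.1")

    # Merge the fix back into trunk as revision d, producing release candidate 612,
    # then earmark the branch for termination.
    git("checkout", "main")
    git("merge", "--no-ff", "release/610.x")
    git("branch", "-d", "release/610.x")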

With Pessimistic Release Branching, the release of version 610 is accompanied by the upfront creation of a 610.x release branch in anticipation of defect(s). If a defect is found in version 610 then as with Optimistic Branching a decision must be made as to where the defect fix should be committed. If the risk of pulling forward from a to b is deemed insignificant then trunk can be pulled forward from a to b and the fix committed to trunk as c for release as version 612. The 610.x branch is therefore terminated without ever being used.

Trunk Based Development Branching - Pessimistic Release Branching Low Risk Defect


If on the other hand the risk is deemed significant then the fix is committed to the 610.x branch as c and released as version 610.1. The fix is merged back into trunk as d and version 612, which will also receive its own branch upon release.

Trunk Based Development Branching - Pessimistic Release Branching High Risk Defect

The choice between Optimistic Branching and Pessimistic Branching for Trunk Based Development is dependent upon product quality and lead times. If product quality is poor and lead times are long, then the upfront cost of Pessimistic Branching may be justifiable. Alternatively, if post-development defects are rare and production releases are frequent then Optimistic Branching may be preferable.

Announcing the Build Quality In book

Continuous Delivery and DevOps experience reports from the wild

I am delighted to announce Matthew Skelton and I are co-authors of a new book Build Quality In – a collection of Continuous Delivery and DevOps experience reports from the wild.

The interdependent disciplines of Continuous Delivery and DevOps can transform the fortunes of an organisation, but they are hard. Matthew and I want to help people on their Continuous Delivery and/or DevOps journey, by sharing the experiences of those who have done it – what worked, what didn’t, and the highs and lows of trying to build quality into an organisation.

Build Quality In will be an anthology of first-person narratives from Continuous Delivery and DevOps practitioners. We have an incredible group of contributors who have agreed to give up their own time to tell their stories, with Dave Farley writing the Continuous Delivery foreword. Further announcements on contributors and a DevOps foreword are pending.

Matthew and I are donating 70% of author royalties for Build Quality In to Code Club – a not-for-profit organisation that runs a UK-wide network of free volunteer-led after-school coding clubs for children aged 9-11. We passionately believe that diversity within the IT industry must improve, and we want to contribute in some small way.

Right now we are in our launch phase. If you are interested in learning more about Build Quality In, please visit our Leanpub landing page. If you are interested in purchasing our book, please register on the Leanpub landing page and indicate how much you would be willing to pay. Over the next few weeks the first contributors will submit their draft articles, and we will announce publication via Build Quality In on Twitter and a pending book website.

Register your interest
