
Rebuild The Thing Wrong

What happens when the launch of a software product rebuild is delayed due to feature parity? How can an organisation rebuild a software product the right way, and minimise opportunity costs as well as technology risks?

Introduction

To adapt to rapidly changing markets, organisations must enhance their established products as well as exploring new product offerings. When the profitability of an existing product is limited by Discontinuous Delivery, a rewrite should be considered a last resort. However, it might become necessary if a meaningful period of code, configuration, and infrastructure rework cannot achieve the requisite deployment throughput and production reliability.

Rebuilding a software product from scratch is usually a costly and risky one-off project, with little attention paid to customer needs. Replicating all the features in an existing product will take weeks or months, and there may be a decision to wait for feature parity before customer launch. It can be a while before Continuous Delivery becomes a reality.

The production cutover will be a big bang migration, with a high risk of catastrophic failure. The probability and cost of cutover failure can be reduced by Decoupling Deploy From Launch, and regularly deploying versions of the new product prior to launch.
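
For example, decoupling could be as simple as gating an already-deployed code path behind a launch flag, so new versions of the product are deployed repeatedly while customers continue to see the existing product. The sketch below is a minimal illustration in Go; the LAUNCH_NEW_PRODUCT flag and both handlers are assumptions made for the example, not a recommended implementation.

    // launch_flag.go: a minimal sketch of Decoupling Deploy From Launch.
    // The LAUNCH_NEW_PRODUCT flag and both handlers are illustrative assumptions.
    package main

    import (
        "log"
        "net/http"
        "os"
    )

    func newProduct(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("new product\n"))
    }

    func existingProduct(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("existing product\n"))
    }

    func main() {
        // The new code path is deployed to production on every release, but it is
        // only served to customers once the flag is flipped at launch time.
        launched := os.Getenv("LAUNCH_NEW_PRODUCT") == "true"

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            if launched {
                newProduct(w, r)
                return
            }
            existingProduct(w, r)
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }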

The Gamez rebuild

For example, a fictional Gamez retail organisation has an ecommerce website. The website is powered by a monolithic application known as eCom, which is hosted in an on-premises data centre. Recently viewed items, gift lists, search, and customer reviews are features that are often praised by customers, and are believed to generate the majority of revenue. eCom deployments are quarterly, production reliability is poor, and data centre costs are too high.

There is a strong business need to accelerate feature delivery, and a rewrite of the Gamez website is agreed. The chosen strategy is a rip and replace, in which the eCom monolith will be rebuilt as a set of cloud-native microservices, hosted with a public cloud provider. Each microservice is intended to be independently testable, deployable, and scalable, so that the website can keep pace with market conditions.

There is an agreement to delay customer launch until feature parity is achieved, and all eCom business logic is replicated in the services. Months later, the services are deemed feature complete, the production cutover is accomplished, and the eCom application is turned off.

The feature parity fallacy

Feature parity is a fallacy. Rebuilding a software product with the same feature set is predicated on the absurd assumption that all existing features generate sufficient revenue to justify their continuation. In all likelihood, an unknown minority of existing features will account for a majority of revenue, and the majority of existing features can be deferred or even discarded.

The main motivation for rebuilding a software product is to accelerate delivery, validate product hypotheses, and increase revenues as a result. Waiting weeks or months for feature parity during a rebuild prolongs Discontinuous Delivery, and exacerbates all the related opportunity costs that originally prompted the rewrite.

Back at Gamez, some product research conducted prior to the rebuild is unearthed, and it contains some troubling insights. Search generates the vast majority of website revenue. Furthermore, product videos is identified as a missing feature that would quickly surpass all other revenue sources. This means a more profitable course of action would have been to launch a new product videos feature, replace the eCom search feature, and validate learnings with customers. An opportunity to substantially increase profitability was missed.

Strangler Fig

Rebuilding a software product right means incrementally launching features throughout the rebuild. This can be accomplished with the Strangler Fig architectural pattern, which can be thought of as applying the Expand and Contract pattern at the application level.

Strangler Fig means gradually creating a new software product around the boundaries of an existing product, and rebuilding one feature at a time until the existing product is no more. It is usually implemented by deploying an event interceptor in front of the existing product. New and existing features are added to the new software product as necessary, and user requests are routed between products. This approach means customers can benefit from new features much sooner, and many more learning opportunities are created.

Using the Strangler Fig pattern would have allowed Gamez to generate product videos revenue from the outset. At first, a customer router microservice would have been implemented in front of the eCom application. It would have sent all customer requests to eCom bar product videos, which would have gone to the first new microservice. Later on, new search and customer reviews services would have replaced the equivalent eCom functionality.
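
A minimal sketch of such a customer router is shown below, in Go. The internal hostnames and the /product-videos path prefix are assumptions made for illustration; they are not part of the fictional Gamez estate.

    // router.go: a minimal sketch of a Strangler Fig customer router.
    // The internal hostnames and the /product-videos prefix are illustrative assumptions.
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "strings"
    )

    // mustProxy builds a reverse proxy to the given backend URL.
    func mustProxy(rawURL string) *httputil.ReverseProxy {
        target, err := url.Parse(rawURL)
        if err != nil {
            log.Fatal(err)
        }
        return httputil.NewSingleHostReverseProxy(target)
    }

    func main() {
        eCom := mustProxy("http://ecom.internal")             // existing monolith
        productVideos := mustProxy("http://videos.internal")  // first new microservice

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Send the first rebuilt feature to its new service,
            // and all other customer requests to the existing product.
            if strings.HasPrefix(r.URL.Path, "/product-videos") {
                productVideos.ServeHTTP(w, r)
                return
            }
            eCom.ServeHTTP(w, r)
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }

As each feature was rebuilt, another routing rule would divert its traffic to a new service, until no requests reached eCom and it could be switched off.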

Case studies

  1. The Road To Continuous Delivery by Michiel Rook details how Strangler Fig helped a Dutch recruitment company replace its ecommerce website
  2. Strangulation Pattern Of Choice by Joshua Gough describes how Strangler Fig allowed a British auction site to replace its ecommerce website
  3. Legacy Application Strangulation Case Studies by Paul Hammant lists a series of product rewrites made possible by Strangler Fig
  4. How to Breakthrough the Old Monolith by Kyle Galbraith visualises how to slice a monolith up into microservices, using Strangler Fig
  5. Estimation Is Evil by Ron Jeffries mentions how incremental launches would have benefitted the eXtreme Programming (XP) birth project Chrysler C3, rather than a yearlong pursuit of feature parity

Build The Right Thing and Build The Thing Right

Should an organisation in peril start its journey towards IT enabled growth by investing in IT delivery first, or product development? Should it Build The Thing Right with Continuous Delivery first, or Build The Right Thing with Lean Product Development?

Introduction

The software revolution has caused a profound economic and technological shift in society. To remain competitive, organisations must rapidly explore new product offerings as well as exploiting established products. New ideas must be continuously validated and refined with customers, if product/market fit and repeatable sales are to be found.

That is extremely difficult when an organisation has the 20th century, pre-Internet IT As A Cost Centre organisational model. There will be a functionally segregated IT department, that is accounted for as a cost centre that can only incur costs. There will be an annual plan of projects, each with its scope, resources, and deadlines fixed in advance. IT delivery teams will be stuck in long-term Discontinuous Delivery, and IT executives will only be incentivised to meet deadlines and reduce costs.

If product experimentation is not possible and product/market fit for established products declines, overall profitability will suffer. What should be the first move of an organisation in such perilous conditions? Should it invest first in Lean Product Development to Build The Right Thing, and uncover new product offerings as quickly as possible? Or should it invest in Continuous Delivery to Build The Thing Right first, and create a powerful growth engine for locating future product/market fit?

Build The Thing Right first

In 2007, David Shpilberg et al published a survey of ~500 executives in Avoiding the Alignment Trap in IT. 74% of respondents reported an under-performing, undervalued IT department, unaligned with business objectives. 15% of respondents had an effective IT department, with variable business alignment, below average IT spending, and above average sales growth. Finally, 11% were in a so-called Alignment Trap, with negative sales growth despite above average IT spending and tight business alignment.

Avoiding the Alignment Trap in IT (Shpilberg et al) – source praqma.com

Shpilberg et al report “general ineffectiveness at bringing projects in on time and on the dollar, and ineffectiveness with the added complication of alignment to an important business objective”. The authors argue organisations that prematurely align IT with business objectives will fall into an Alignment Trap. They conclude organisations should build a highly effective IT department first, and then align IT projects to business objectives. In other words, Build The Thing Right before trying to Build The Right Thing.

The conclusions of Avoiding the Alignment Trap in IT are naive, because they ignore the implications of IT As A Cost Centre. An organisation with ineffective IT will undoubtedly suffer if it increases business alignment and investment in IT as is. However, that is a consequence of functionally siloed product and IT departments, and the antiquated project delivery model tied to IT As A Cost Centre. Projects will run as a Large Batch Death Spiral for months without customer feedback, and invariably result in a dreadful waste of time and money. When Shpilberg et al define success as “delivered projects with promised functionality, timing, and cost”, they are measuring manipulable project outputs, rather than customer outcomes linked to profitability.

Build The Right Thing first

It is hard to Build The Right Thing without first learning to Build The Thing Right, but it is possible. If flow is improved through co-dependent product and IT teams, the negative consequences of IT As A Cost Centre can be reduced. New revenue streams can be unlocked and profitability increased, before a full Continuous Delivery programme can be completed.

An organisation will have a number of value streams to convert ideas into product offerings. Each value stream will have a Fuzzy Front End of product and development activities, and a technology value stream of build, testing, and operational activities. The time from ideation to customer is known as cycle time.

Flow in a value stream can be improved by:

  • understanding the flow of ideas
  • reducing batch sizes
  • quantifying value and urgency

The flow of ideas can be understood by conducting a Value Stream Mapping with senior product and IT stakeholders. Visualising the activities and teams required for ideas to reach customers will identify an approximate cycle time, and the sources of waste that delay feedback. A Value Stream Mapping will usually uncover a shocking amount of rework and queue times, with the majority in the Fuzzy Front End.

For example, a Value Stream Mapping might reveal a 10 month cycle time, with 8 months spent on ideas bundled up as projects in the Fuzzy Front End, and 2 months spent on IT in the technology value stream. Starting out with Build The Thing Right would only tackle 20% of cycle time.

Reducing batch sizes means unbundling projects into separate features, reducing the size of features, and using Work In Process (WIP) Limits across product and IT teams. Little’s Law guarantees that distilling projects into small, per-feature deliverables and restricting in-flight work will result in shorter cycle times. In Lean Enterprise, Jez Humble, Joanne Molesky, and Barry O’Reilly describe reducing batch sizes as “the most important factor in systemically increasing flow and reducing variability”.
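
Little’s Law states that average cycle time equals average work in process divided by average throughput, so limiting WIP at a stable rate of completion directly shortens cycle times. A minimal illustration, with numbers invented purely for the sketch:

    // littles_law.go: average cycle time = average work in process / average throughput.
    // The numbers are invented for the illustration, not taken from a real value stream.
    package main

    import "fmt"

    func main() {
        wip := 30.0       // features in flight across product and IT teams
        throughput := 5.0 // features completed per week

        fmt.Printf("cycle time: %.1f weeks\n", wip/throughput) // 6.0 weeks

        // Halving WIP at the same throughput halves the average cycle time.
        fmt.Printf("cycle time with a WIP limit of 15: %.1f weeks\n", 15.0/throughput) // 3.0 weeks
    }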

Quantifying value and urgency means working with product stakeholders to estimate the Cost Of Delay of each feature. Cost Of Delay is the economic benefit a feature could generate over time, if it was available immediately. Considering how time will affect how a feature might increase revenue, protect revenue, reduce costs, and/or avoid costs is extremely powerful. It encourages product teams to reduce cycle time by shrinking batch sizes and eliminating Fuzzy Front End activities. It uncovers shared assumptions and enables better trade-off decisions. It creates a shared sense of urgency for product and IT teams to quickly deliver high value features. As Don Reinertsen says in The Principles of Product Development Flow, “if you only measure one thing, measure the Cost Of Delay”.

For example, a manual customer registration task generates £100Kpa of revenue, and is performed by one £50Kpa employee. The economic benefit of automating that task could be calculated as £100Kpa of revenue protection and £50Kpa of cost reduction, so £2.9Kpw is lost while the feature does not exist. If there is a feature with a Cost Of Delay greater than £2.9Kpw, it should be prioritised first and the manual task should remain for now.
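
A minimal sketch of that weekly Cost Of Delay calculation, using the illustrative figures from the example above:

    // cost_of_delay.go: the weekly Cost Of Delay calculation from the example above.
    // All figures are the illustrative ones from the text.
    package main

    import "fmt"

    func main() {
        revenueProtection := 100_000.0 // £ per annum of revenue the task protects
        costReduction := 50_000.0      // £ per annum freed by automating the task

        weekly := (revenueProtection + costReduction) / 52
        fmt.Printf("Cost Of Delay: £%.1fK per week\n", weekly/1000) // ≈ £2.9K per week
    }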

Build The Right Thing case study – Maersk Line

Black Swan Farming – Maersk Line by Joshua Arnold and Özlem Yüce demonstrates how an organisation can successfully Build The Right Thing first, by understanding the flow of ideas, reducing batch sizes, and quantifying value and urgency. In 2010, Maersk Line IT was a £60M cost centre with 20 outsourced development teams. Between 2008 and 2010 the median cycle time in all value streams was 150 days. 62% of ~3000 requirements in progress were stuck in Fuzzy Front End analysis.

Arnold and Yüce were asked to deliver more value, flow, and quality for a global booking system with a median cycle time of 208 days, and quarterly production releases in IT. They mapped the value stream, shrank features down to the smallest unit of deliverable value, and introduced Cost Of Delay into product teams alongside other Lean practices.

After 9 months, improvements in Fuzzy Front End processes resulted in a 48% reduction in median cycle time to 108 days, an 88% reduction in defect count, and increased customer satisfaction. Furthermore, using Cost Of Delay uncovered 25% of requirements were a thousand times more valuable than the alternatives, which led to a per-feature return on investment six times higher than the Maersk Line IT average. By applying the same Lean principles behind Continuous Delivery to product development prior to additional IT investment, Arnold and Yüce achieved spectacular results.

Build The Right Thing and Build The Thing Right

If an organisation in peril tries to Build The Right Thing first, it risks searching for product/market fit without the benefits of fast customer feedback. If it tries to Build The Thing Right first, it risks spending time and money on Continuous Delivery without any tangible business benefits.

An organisation should instead aim to Build The Right Thing and Build The Thing Right from the outset. A co-evolution of product development and IT delivery capabilities is necessary, if an organisation is to achieve the necessary profitability to thrive in a competitive market.

This approach is validated by Dr. Nicole Forsgren et al in Accelerate. Whereas Avoiding The Alignment Trap In IT was a one-off assessment of business alignment in IT As A Cost Centre, Accelerate is a multi-year, scientifically rigorous study of thousands of organisations worldwide. Interestingly, Lean product development is modelled as understanding the flow of work, reducing batch sizes, incorporating customer feedback, and team empowerment. The data shows:

  • Continuous Delivery and Lean product development both predict superior software delivery performance and organisational culture
  • Software delivery performance and organisational culture both predict superior organisational performance in terms of profitability, productivity, and market share
  • Software delivery performance predicts Lean product development

On the reciprocal relationship between software delivery performance and Lean product development, Dr. Nicole Forsgren et al conclude “the virtuous cycle of increased delivery performance and Lean product management practices drives better outcomes for your organisation”.

Exapting product development and technology

An organisational ambition to Build The Right Thing and Build The Thing Right needs to start with the executive team. Executives need to recognise an inability to create new offerings or protect established products equates to mortal peril. They need to share a vision of success with the organisation that articulates the current crisis, describes a state of future prosperity, and injects urgency into day-to-day work.

The executive team should introduce the Improvement Kata into all levels of the organisation. The Improvement Kata encourages problem solving via experimentation, to proceed towards a goal in iterative, incremental steps amid ambiguities, uncertainties, and difficulties. It is the most effective way to manage a gradual co-emergence of Lean Product Development and Continuous Delivery.

Experimentation with organisational change should include a transition from IT As A Cost Centre to IT As A Business Differentiator. This means technology staff moving from the IT department to work in long-lived, outcome-oriented teams in one or more product departments, which are accounted for as profit centres and responsible for their own investment decisions. One way to do this is to create a Digital department of co-located product and technology staff, with shared incentives to create new product offerings. Handoffs and activities in value streams will be dramatically reduced, resulting in much faster cycle times and tighter customer feedback loops.

Instead of an annual budget and a set of fixed scope projects, there needs to be a rolling budget that funds a rolling plan linked to desired outcomes and strategic business objectives. The scope, resources, and deadlines of the rolling plan should be constantly refined by validated learnings from customers, as delivery teams run experiments to find problem/solution fit and product/market fit for a particular business capability.

Those delivery teams should be cross-functional, with all the necessary personnel and skills to apply Design Thinking and Lean principles to problem solving. This should include understanding the flow of ideas, reducing batch sizes, and quantifying value and urgency. As Lean Product Development and Continuous Delivery capabilities gradually emerge, it will become much easier to innovate with new product offerings and enhance established products.

It might take months or years of investment, experimentation, and disruption before an organisation has adopted Lean Product Development and Continuous Delivery across all its value streams. It is important to protect delivery expectations and staff welfare by making changes one value stream at a time, starting with existing products or new product offerings stuck in Discontinuous Delivery.

Acknowledgements

Thanks to Emily Bache, Ozlem Yuce, and Thierry de Pauw for reviewing this article.

Further Reading

  1. Lean Software Development by Mary and Tom Poppendieck
  2. Designing Delivery by Jeff Sussna
  3. The Essential Drucker by Peter Drucker
  4. Measuring Continuous Delivery by Steve Smith
  5. The Cost Centre Trap by Mary Poppendieck
  6. Making Work Visible by Dominica DeGrandis

IT as a Business Differentiator

How can Continuous Delivery power innovation in an organisation?

When an organisation is in a state of Continuous Delivery, its technology strategy can be described as IT as a Business Differentiator. IT staff will work in one or more product departments, which are accounted for as profit centres in which profits are generated from incoming revenues and outgoing costs. A profit centre provides services to customers, and is responsible for its own investment decisions and profitability.

IT as a Business Differentiator promotes IT to be a front office function. There will be a rolling budget, and a rolling plan consisting of dynamic product areas with scope, resources, and deadlines constantly refined by feedback. Long-lived, outcome-oriented delivery teams will implement experiments to find product/market fit for a particular business capability.

This is in direct contrast to Nicholas Carr’s 2003 proclamation that IT Doesn’t Matter, to which history has not been kind. Carr failed to predict the rise of Agile Development, Lean Product Development, and in particular Cloud Computing, which has commoditised many lower-order technology functions. These advancements have contributed to the ongoing software revolution termed “Software Is Eating The World” by Marc Andreessen in 2011, which has caused a profound economic and technological shift in society.

Continuous Delivery as the norm

IT as a Business Differentiator is an Internet-inspired, 21st century technology strategy in which IT contributes to uncovering new revenue streams that increase overall profitability for an organisation. This means executives and managers are incentivised to maximise revenue generating activities, as well as controlling cost generating activities.

Continuous Delivery is table stakes for IT as a Business Differentiator, as IT executives and managers are accountable for delays between ideation and customer launch. There will be an ongoing investment in technology and organisational change, to ensure deployment throughput meets market demand. There will be a focus on optimising flow by eliminating handoffs, reducing work in progress, and removing wasteful activities. The reliability strategy will be to Optimise For Resilience, in order to minimise failure response time and blast radius.

IT as a Business Differentiator and Continuous Delivery were validated by Dr. Nicole Forsgren et al, in the 2018 book Accelerate. Surveys of 23,000 people working at 2,000 organisations worldwide revealed:

  • Continuous Delivery results in high performance IT
  • High performance IT leads to simultaneous improvements in the stability and throughput of IT delivery, without trade-offs
  • High performance IT means an organisation is twice as likely to exceed profitability, market share, and productivity goals
  • Continuous Delivery also results in less rework, an improved organisational culture, reduced team burnout, and increased job satisfaction

Leaving IT As A Cost Centre

If an organisation has institutionalised IT as a Cost Centre as its technology strategy, moving to IT as a Business Differentiator would be difficult. It would require an executive-level decision, in one of the following scenarios:

  • Competition – rival organisations are increasing their market share
  • Cognition – IT is recognised as the engine of future business growth
  • Catastrophe – a serious IT failure has an enormously negative financial impact

If the executive leadership of the organisation agree there is an existential crisis, they should publicly commit to IT as a Business Differentiator. That should include an ambitious vision of success that explains the current crisis, describes a state of future economic prosperity, and injects a sense of urgency into the day-to-day work of personnel.

There is no recipe for moving from IT as a Cost Centre to IT as a Business Differentiator. As a complex, adaptive system, an organisation will have a dispositional state of time-dependent possibilities, rather than linear cause and effect. A continuous improvement method such as the Improvement Kata should be used to experiment with different changes. Experiments could include:

  • co-locating IT delivery teams with their product stakeholders
  • removing cost accounting metrics from IT executive incentives
  • creating a Digital department of product and IT staff, as a profit centre

This leaves the open question of whether an IT department should adopt Continuous Delivery before, during, or after a move from IT as a Cost Centre to IT as a Business Differentiator.

Further Reading

  1. The Principles Of Product Development Flow by Don Reinertsen
  2. Measuring Continuous Delivery by the author
  3. Lean Enterprise by Jez Humble, Joanne Molesky, and Barry O’Reilly
  4. Utility vs. Strategic Dichotomy by Martin Fowler
  5. Products Not Projects by Sriram Narayan

Acknowledgements

Thanks to Thierry de Pauw for his feedback.

IT as a Cost Centre

Why does Continuous Delivery encounter resistance from IT executives and managers in so many organisations, and why is it so difficult to implement? Why does IT as a Cost Centre result in long-term Discontinuous Delivery?

Introduction

When an organisation is in a state of Discontinuous Delivery, its organisational model can usually be described as IT as a Cost Centre. There will be a functionally segregated IT department, that is accounted for as a cost centre in which costs can be incurred but profits cannot be generated. The IT department will provide services to product departments, which are accounted for as profit centres responsible for investment decisions and profitability.

IT as a Cost Centre relegates IT to a back office function. Each year, the IT department will be allocated a fixed budget to deliver a set of projects required by the product departments. Each project will represent a large batch of pre-agreed requirements. Short-lived project teams will try to deliver those requirements with scope, resources, and deadlines all fixed in advance.

This dovetails with the traditional view of IT as a universal commodity, popularised by Nicholas Carr in IT Doesn’t Matter in 2003. Carr argued IT was merely an infrastructural technology, and concluded it was a cost of doing business that could not provide a sustained competitive advantage.

Cost accounting suzerainty

IT as a Cost Centre is a pre-Internet organisational model from the mid-20th century that still persists today. The world is now reliant on, and connected by, technology, yet in a 2014 CIO survey of 700+ organisations, 48% still had IT as a cost centre.

In The Cost Centre Trap, Mary Poppendieck traces the popularity of IT cost centres back to the ubiquity of cost accounting. Poppendieck describes how the performance of an IT cost centre is measured solely in terms of cost management. This means accounting metrics percolate into the performance metrics of IT executives and managers, creating a culture of cost control with little regard for product development or organisational performance. These incentives will be markedly different to those of product executives and managers, and usually contribute to an adversarial relationship between product departments and IT.

In cost accounting, inventory is tracked as an asset, maximum resource utilisation is encouraged, and development work is capitalised until production launch. These factors create a hidden bias towards the institutionalisation of large projects, due to their perceived economies of scale. In reality, the project delivery model is an ineffective, inefficient vehicle that impedes value, quality, and flow.

Discontinuous Delivery as the norm

A Continuous Delivery programme is an unending journey of continuous improvement, that requires a substantial investment in order to achieve a time to market that can improve product/market fit and increase revenues.

This is likely to be a hard sell in an organisation with IT as a Cost Centre. IT executives and managers will be incentivised to reduce costs wherever possible, while delivering projects that are supposedly on time, on scope, and on budget. As a result, there will be resistance to the idea of spending money on an internal programme with an explicit goal of improving organisational performance and no fixed end date.

Delivery teams will be short-lived, which encourages people to prioritise short-term feature work over long-term architectural work, and restricts the deployability and testability of services. The reliability strategy will be to Optimise For Robustness, which increases lead times and failure blast radius. Furthermore, the lack of a mandate beyond development work will make it difficult to work with operations teams to establish consistent toolchains for deployments, logging, monitoring, and alerting.

In short, Dr. Eli Goldratt was right when he said in The Goal “if it comes from cost accounting, it must be wrong”.

Further Reading

  1. The Principles Of Product Development Flow by Don Reinertsen
  2. Measuring Continuous Delivery by the author
  3. Lean Enterprise by Jez Humble, Joanne Molesky, and Barry O’Reilly
  4. Utility vs. Strategic Dichotomy by Martin Fowler
  5. No Projects by the author

Acknowledgements

Thanks to Thierry de Pauw for his feedback.

Organisation antipattern: Project Teams

Projects kill teams and flow

Given the No Projects definition of a project as “a fixed amount of time and money assigned to deliver a large batch of value-add”, it is not surprising that for many organisations a new project heralds the creation of a Project Team:

A project team is a temporary organisational unit responsible for the implementation and delivery of a project

When a new project is assigned a higher priority than business as usual and the Iron Triangle is in full effect, there can be intense pressure to deliver on time and on budget. As a result a Project Team appears to be an attractive option, as costs and progress can be monitored in isolation, and additional personnel can be diverted to the project when necessary. Unfortunately, in addition to managing the increased risk, variability, and overheads associated with a large batch of value-add, a Project Team is fatally compromised by its coupling to the project lifecycle.

The process of forming a team of complementary personnel that establish a shared culture and become highly productive is denied to Project Teams from start to finish. At the start of project implementation, the presence of a budget and a deadline means a Project Team is formed via:

  1. Cannibalisation – impairs productivity as entering team members incur a context switching overhead
  2. Recruitment – devalues cultural fit and required skills as hiring practices are compromised

Furthermore, at the end of project delivery the absence of a budget or a deadline means a Project Team is disbanded via:

  1. Cannibalisation – impairs productivity as exiting team members incur a context switching overhead
  2. Termination – devalues cultural fit and acquired skills as people are undervalued

This maximisation of resource efficiency clearly has a detrimental effect upon flow efficiency. Cannibalising a team member objectifies them as a fungible resource, and devalues their mastery of a particular domain. Project-driven recruitment of a team member ignores Johanna Rothman’s advice that “when you settle for second best, you often get third or fourth best” and “if a candidate’s cultural preferences do not match your organisation, that person will not fit”. Terminating a team member denigrates their accumulated domain knowledge and skills, and can significantly impact staff morale. Overall this strategy is predicated upon the notion that there will be no further business change, and as Allan Kelly warns that “the same people are unlikely to work together again”, it is an extremely dangerous assumption.

The inherent flaws in the Project Team model can be validated by an examination of any professional sports team that has enjoyed a period of sustained success. For example, when Sir Alex Ferguson was interviewed about his management style at Manchester United, he described his initial desire to create a “continuity of supply to the first team… the players all grow up together, producing a bond”. This approach fostered a winning culture that valued long-term goals over short-term gains, and led to 20 years of unrivalled dominance. It is unlikely that Manchester United would have experienced the same amount of success had their focus been upon a particular season at the expense of others.

Therefore, the alternative to building a Project Team is to grow a Product Team:

A product team is a permanent organisational unit responsible for the continuous improvement of a product

Following Johanna’s advice to “keep teams of people together and flow the projects through cross-functional teams”, Product Teams are decoupled from project lifecycles and are empowered to pull in work as required. This enables a team to form a shared culture that reduces variability and improves stability, which as observed by Tobias Mayer “leads to enhanced focus and high performance”. Over a period of time a Product Team will master the relevant business and technical domains, which will fuel product innovation and produce a return on investment that rewards us for making the correct strategic decision of favouring products over projects.

No Projects

Projects kill flow and teams. Focus on products, not projects

Since the Dawn of Computer Time, enormous sums of money and embarrassing amounts of time have been squandered upon software projects that have delivered little or no return on investment, with projects floundering between segregated Business and IT divisions squabbling over overestimated value-add and underestimated delivery dates. Given Grant Rule’s assertion that “studies too numerous to mention show that software projects are challenged or fail”, why are software projects so prone to failure and why do they persist?

To answer these questions, we must understand what constitutes a software project and why its delivery model is incongruent with product development. If we start with the PRINCE 2 project definition of “a temporary organization that is needed to produce a unique and predefined outcome or result at a pre-specified time using predetermined resources”, we can offer a concise definition as follows:

A project is a fixed amount of time and money assigned to deliver value-add

The key characteristic of a software project appears to be its fixed end date, which as a delivery model has been repeatedly debunked by IT practitioners such as Allan Kelly denouncing “endless, pointless discussions about when it will be done… successful software doesn’t have a pre-specified end date” and Marc Lankhorst arguing that “over 80% of IT spending in large organisations is on maintenance”. However, the fixed end date of a software project is invariably a consequence of its requirement for a collection of value-adding features to be simultaneously delivered, suggesting an augmented definition of:

A project is a fixed amount of time and money assigned to deliver a large batch of value-add

Once we view software projects as large batches of value-add, we can apply The Principles Of Product Development Flow by Don Reinertsen and better understand why so many projects fail:

  1. Increased cycle time – a project might not be deliverable on a particular date unless either demand is throttled or capacity is increased, e.g. artificially reduce user demand or increase staffing levels
  2. Increased variability – a project might be delayed due to unpredictable blockages in the value stream, e.g. testing of features B and C blocked while testing of feature A takes longer than expected
  3. Increased feedback delays – a project might incur significant costs due to slow feedback on bad design decisions and/or defects increasing rework, e.g. failures in feature C not detected until features A and B have passed testing
  4. Increased risk – a project might have an increased probability and cost of failure due to increased requirements/technology change, increased variation, and increased feedback delays
  5. Increased overheads – a project might endure development inefficiencies due to increased requirements/technology change, e.g. feature C development time increased by need to understand complexity of features A and B
  6. Increased inefficiencies – a project might encounter increased transaction costs due to increased requirements/technology change, e.g. feature A slow to release as features B and C also required for release
  7. Increased irresponsibility – a project might suffer from diluted responsibilities, e.g. staff member has responsibility for delivery of feature A but is unincentivised to participate in delivery of features B or C

Don also explains why the project delivery model remains prevalent: large batches can become institutionalised as they “appear to have scale economies that increase efficiency [and] appear to reduce variability”. Software projects might indeed appear to be efficient, due to perceived value stream inefficiencies and the counter-intuitiveness of batch size reduction, but from a product development standpoint the project is an inefficient, ineffective delivery model that impedes value, quality, and flow.

There is a compelling alternative to the project delivery model – product development flow, in which we apply economic theory to Lean product development practices in order to flow product designs through our organisation. Product development flow emphasises the benefits of batch size reduction and encourages a one piece continuous flow delivery model, in order to reduce costs and improve return on investment.

Discarding the project delivery model in favour of product development flow requires an entirely different mindset, as epitomised by Grant urging us to “accommodate the ideas of flow production and lean systems thinking” and Allan affirming that “BAU isn’t a dirty word… enhancing products is Business As Usual, we should be proud of that”. On that basis the No Projects movement was conceived by Joshua Arnold to promote the valuation of products over projects, and anointed as:

Projects kill flow and teams. Focus on products, not projects
