Throughout this series of articles I’ve explored how we need to bring the same rigour to architecting our software delivery value streams that we see in advanced manufacturing plants. Once we agree on what flows, we can analyze those flows to identify bottlenecks and opportunities to remove them. However, every time I’ve asked an executive-level IT leader where the bottleneck is, I’ve received either a blank stare or a vague answer, from otherwise extremely capable people.
To look for a bottleneck in a production system, we must first understand what flows through that system. We’ve seen many measures of software delivery flow proposed and analyzed, including lines of code (LOC), function points, work items, story points, deployments, and releases.1 Each captures a notion of value flow from a different perspective, but each has its limitations, especially when you consider the end-to-end flow of business value through a delivery pipeline. If my experience talking to IT leaders is a guide, from a business perspective, we simply don’t have enough consensus on this core question of what flows through a software value stream. Yet this should be the most fundamental question to answer if we are to apply lean principles to software delivery.
That lack of agreement means that the vast majority of enterprise IT organizations don’t have a well-defined productivity measure for what flows through their software production process. Contrast that with the automotive industry, where the number of cars produced is a clear measure of automotive value streams. Another measure is lifecycle profits, which Donald Reinertsen proposed in his seminal book The Principles of Product Development Flow.2 Reinertsen warned against proxy metrics for value and productivity. Measures such as LOC and the number of deployments per day fall into that category because they’re proxies for value delivered to the software consumer, not direct representations of that value. For example, a one-line code change could deliver as much value as a 1,000-line code change. But without clear agreement on what’s flowing and what the units of production are, we’re far from delivering on anything like the lifecycle profits measurement that Reinertsen suggested.
Business leaders know productivity when they see it—for example, through products that drive market adoption or revenue results faster than others. But correlating development activities to those results has been more of an opaque art than a disciplined activity. To define productivity in a value stream, and where the bottleneck lies, we must first define what flows.
The Four Flow Items
To define the flow, we can go back to the first principles of lean thinking that drove improvement in mass production.3 Lean thinking first considers not what we produce, but what value the customer pulls. If we think back to the early days of software, with companies stamping out installation CDs in shrink-wrapped boxes, we can try to draw an analogy to car production and say that what software production produces is boxes, and perhaps stretch that analogy to releases in the modern world of DevOps. But that analogy was weak then and has been rendered irrelevant in the age of cloud computing and software as a service, where releases are so frequent and automatic that they’re becoming invisible to the user. If customers aren’t pulling releases, what units of value are they pulling?
To pull value, customers must be able to see that value and be willing to exchange something for it. They might exchange money, or, in the case of a product with indirect and ad-based monetization such as a social media tool, they might exchange the time engaged with the product. Consider the last time you derived new value from a product or went back to using a product you hadn’t been using for a while. What triggered that exchange of value in terms of spending your time or money? Chances are it was a new feature that met your needs or delighted you in some way. Or, perhaps it was a fix of a defect that prevented you from using a product you had otherwise valued. And here lies the key to defining what flows through a software value stream. If what customers are pulling are new features and defect fixes, those must form part of that flow.
If we consider feature additions and defect fixes as the units of production—that is, the flow items—we can characterize work across all the people and teams in a value stream as applying to one of these units. Given full visibility into every process and tool in an organization, we could identify exactly how many designers, developers, managers, testers, and help desk professionals were involved in creating, deploying, and supporting a particular feature. The same goes for a defect fix. But is this the only work that’s being done in the value streams?
In an analysis of 308 toolchains, my colleagues and I identified two other kinds of work that are invisible to users and are pulled through the value stream by a different kind of stakeholder.4 First, there’s work on risks. This includes the security, regulatory, and compliance work that must be defined by business analysts, scheduled onto development backlogs, implemented, tested, deployed, and maintained. This work competes for priority against features and defects. It isn’t pulled by the customer because the customer usually can’t see it until it’s too late—for example, a security incident that leads to a number of security defects being fixed and security features being added. Instead, this work is pulled internally by the organization—for example, by the chief risk officer and his or her team.
The fourth type of work we observed is debt reduction. The concept of technical debt was introduced by Ward Cunningham5 and describes the need to perform work on the software and infrastructure code base that, if not done, will result in the reduced ability to modify or maintain that code in the future. For example, a focus on feature delivery can result in a large accumulation of technical debt. Scaling an operational environment without sufficient automation can result in infrastructure debt. If work isn’t done to reduce that debt, it could impede the future ability to deliver features—for example, by making the software architecture too tangled to innovate on. This work tends to be pulled by software architects.
Table 1 summarizes the four flow items.

| Flow item | Delivers | Pulled by |
| --- | --- | --- |
| Features | New business value | Customers |
| Defects | Quality fixes | Customers |
| Risks | Security, regulatory, and compliance work | Risk officers |
| Debts | Removal of impediments to future delivery | Architects |
In analyzing the 308 toolchains, we found a large variety of work item types defined in agile, application lifecycle management (ALM), and DevOps tools. Each corresponded to work being delivered. Some organizations used detailed agile taxonomies. The Scaled Agile Framework (SAFe) offers one such taxonomy that provides fine-grained distinctions between the types of work flowing through a value stream.6 Other organizations used more ad hoc approaches, creating their own classifications of work items such as requirements and defects. In some cases, these approaches resulted in dozens of defect types.
No matter what the approach was, when we looked at it through the lens of customer pull, we could classify all types of work into the four flow items in Table 1. These flow items follow the MECE principle: they’re mutually exclusive and collectively exhaustive. In other words, all work that flows through a software value stream is characterized by one, and only one, of the flow items.
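As a concrete illustration of this classification exercise, it amounts to a total mapping from an organization’s tool-specific work item types onto the four flow items, with every type landing in exactly one bucket. This is only a sketch; the work item type names below are hypothetical, not drawn from any particular toolchain:

```python
from enum import Enum

class FlowItem(Enum):
    """The four MECE flow items: every unit of work maps to exactly one."""
    FEATURE = "feature"  # new business value, pulled by customers
    DEFECT = "defect"    # quality fixes, pulled by customers
    RISK = "risk"        # security/compliance work, pulled by risk officers
    DEBT = "debt"        # debt reduction, pulled by architects

# Hypothetical mapping for one organization's ad hoc work item types.
WORK_ITEM_TO_FLOW_ITEM = {
    "user story": FlowItem.FEATURE,
    "enhancement": FlowItem.FEATURE,
    "bug": FlowItem.DEFECT,
    "regression": FlowItem.DEFECT,
    "security finding": FlowItem.RISK,
    "audit task": FlowItem.RISK,
    "refactoring": FlowItem.DEBT,
    "infrastructure upgrade": FlowItem.DEBT,
}

def classify(work_item_type: str) -> FlowItem:
    """Map a tool-specific work item type to its single flow item.

    Raises KeyError for unmapped types, making gaps in the
    classification visible rather than silently ignored.
    """
    return WORK_ITEM_TO_FLOW_ITEM[work_item_type.lower()]

print(classify("Bug"))  # FlowItem.DEFECT
```

Because the mapping is a plain dictionary, mutual exclusivity is enforced by construction (each type has exactly one value), and collective exhaustiveness can be checked by confirming that no work item type in the toolchain is missing from the mapping.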
Other Views of Software Delivery
Other characterizations of software delivery work exist, such as Philippe Kruchten and his colleagues’ positive/negative versus visible/invisible quadrant7 (see Figure 1) and the characterizations described in The DevOps Handbook.8 Such characterizations can be useful for identifying types of development work. For example, the ITIL (Information Technology Infrastructure Library) process defines important differences between problems, incidents, and changes.
FIGURE 1. Philippe Kruchten and his colleagues’ depiction of tasks related to improving software.10
However, these characterizations are a layer down from the flow items in that they’re more delivery specific and less customer and value stream specific. As such, we believe they’re more useful for characterizing the artifact types being worked on in the delivery of the flow items. For example, in SAFe terminology, the term for architectural work is enablers. This work can be done to support a new feature, fix a defect, reduce technical debt, or address a risk by providing the infrastructure needed to support compliance. We’ve observed such architecture work items flowing under several of the flow items I described.
Although that layer directly below the flow items is critical, from the customer and business stakeholder viewpoint, the delivery of the flow items is what determines whether something flowed through the value stream. How that was done, and whether it was done by adding new APIs or simply by creating additions to the UI, is just an implementation detail from this high-level viewpoint.
We’re continuing to analyze every toolchain we receive to determine whether other top-level types of work exist. But to date, all the work item types we’ve analyzed can be mapped to these four flow items. We believe they’re a useful abstraction for analyzing flow through software value streams and that studying software delivery through the lens that these flow items provide could yield interesting results. Because these flow items provide a different, more business- and customer-centric look at what flows through software value streams, we hope they result in further debate and discussion and help lead us to a productivity model that’s based on the value delivered to the customer rather than on proxies for the work that was done.
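One practical consequence of this abstraction is that a value stream’s delivered work can be summarized as a distribution across the four flow items rather than as a proxy such as LOC. The sketch below assumes each completed work item has already been classified into a flow item; the data shape is illustrative, not taken from any particular tool:

```python
from collections import Counter

def flow_distribution(completed_items):
    """Given completed work items tagged with a flow item,
    return the fraction of delivered work in each category."""
    counts = Counter(item["flow_item"] for item in completed_items)
    total = sum(counts.values())
    return {flow_item: count / total for flow_item, count in counts.items()}

# A hypothetical iteration's completed work.
completed = [
    {"id": 1, "flow_item": "feature"},
    {"id": 2, "flow_item": "feature"},
    {"id": 3, "flow_item": "defect"},
    {"id": 4, "flow_item": "debt"},
]
print(flow_distribution(completed))
# {'feature': 0.5, 'defect': 0.25, 'debt': 0.25}
```

A distribution like this makes trade-offs visible at the business level: an iteration spent entirely on features implies that risk and debt work is being deferred, which the earlier sections argue will eventually impede future feature delivery.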
For more on how work flows through your software value stream, see the full set of learnings summarized in Project to Product.
This is the sixth blog in a series promoting the genesis of my book Project to Product. To ensure you don’t miss any further blogs, you can receive future articles and other insights delivered directly to your inbox by signing up to the Project to Product newsletter.
- A.N. Meyer et al., “Software Developers’ Perceptions of Productivity,” Proc. 22nd ACM SIGSOFT Int’l Symp. Foundations of Software Eng. (FSE 14), 2014, pp. 19–29.
- D. Reinertsen, The Principles of Product Development Flow: Second Generation Lean Product Development, Celeritas, 2009.
- J.P. Womack and D.T. Jones, Lean Thinking: Banish Waste and Create Wealth in Your Corporation, 2nd ed., Free Press, 2003.
- M. Kersten, “Mining the Ground Truth of Enterprise Toolchains,” IEEE Software, vol. 35, no. 3, 2018, pp. 12–17.
- W. Cunningham, “The WyCash Portfolio Management System,” Proc. 1992 Conf. Object-Oriented Programming Systems, Languages, and Applications (OOPSLA 92), 1992; http://c2.com/doc/oopsla92.html.
- D. Leffingwell et al., SAFe 4.0 Reference Guide: Scaled Agile Framework for Lean Software and Systems Engineering, Addison-Wesley Professional, 2016.
- P. Kruchten, R.L. Nord, and I. Ozkaya, “Technical Debt: From Metaphor to Theory and Practice,” IEEE Software, vol. 29, no. 6, 2012, pp. 18–21.
- G. Kim et al., The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations, IT Revolution Press, 2016.
- K. Karu, ITIL and DevOps: Getting Started, white paper, Axelos, 2017; https://www.axelos.com/case-studies-and-white-papers/itil-and-devops-getting-started.
- P. Kruchten, “Agility and Architecture or: What Colour Is Your Backlog?,” presentation at Agile New England, 2011; https://pkruchten.files.wordpress.com/2012/07/kruchten-110707-what-colours-is-your-backlog-2up.pdf.
A version of this article was originally published in the July 2018 issue of IEEE Software: M. Kersten, “What Flows through the Software Value Stream,” IEEE Software, vol. 35, no. 4, pp. 8–11, ©2018 IEEE, doi: 10.1109/MS.2018.2801538.