Why Value Stream Management for Software Delivery?
IT Is Shifting to a Business Value Focus
“Whenever there is a product for a customer, there is a value stream.” (Rother, M. & Shook, J. (1999). Learning to See. Lean Enterprise Institute, Introduction.)
Every company today is a software company, and the role of Chief Information Officer (CIO) has never been more important. But keeping the job has become much harder.
The business demands better software, delivered faster, with zero tolerance for outages or security breaches. A CIO’s performance is no longer measured by operational efficiency but by customer experiences, digital products and revenue generation.
Value Stream Management is a management technique or practice that focuses on increasing the flow of business value from customer request to customer delivery. Its systematic approach to measuring and improving flow helps organizations shorten time-to-market, increase throughput, improve product quality and optimize for business outcomes.
In this page we’ll focus on how IT organizations can actually “do” value stream management for software portfolios. We’ll talk about these topics:
- What is a value stream in software delivery?
- What flows through a software delivery value stream?
- How software delivery value streams are different from manufacturing value streams
- What to measure in software delivery value streams
- How to measure flow across heterogeneous toolchains
- How, in practice, to improve flow through software delivery value streams
What is a value stream in software delivery?
The term ‘value stream’ was born of the Lean movement to describe the material and information flow to create value. A value stream is the sequence of activities an organization undertakes to deliver on a customer need.(Martin, K. & Osterling, M. (2014). Value Stream Mapping. McGraw-Hill, p. 2-3.)
Customers may be external (customer-facing value streams) or internal (value-enabling value streams).
Software delivery organizations have a value stream per product, application or service.
Value stream thinking puts the customer at the center, which helps transition IT organizations from an internal, project- and cost-centric focus to a product operating model. That’s why value streams are foundational in both the Project to Product movement and enterprise agility frameworks, like SAFe®.
Thinking in value streams helps zoom out of the details and take a macro look at business processes in order to identify strategic ways to improve them. Value stream thinkers ask: How can we provide greater and greater value to our customers—through innovation—while eliminating delays, improving quality and reducing cost, labor and employee frustration?
Macro business process of software delivery illustrated in a value stream map
What flows through a software delivery value stream?
Value is something for which customers are willing to exchange an economic unit (time or money).
The units of value that flow through the software value stream are called "flow items". All the work across all the people and teams within a value stream should be applied to the creation of these flow items.
The easiest example of a flow item is a feature that delivers a new product capability, as customers will clearly pay for it if they need it or are delighted by it. Fixes for defects that impair product usage are another clear example of a flow item.
But is that all that flows through a product value stream? There are actually four flow items in software delivery value streams: Features, defects, risks and debt.
The goal of practicing value stream management in software delivery is to increase the rate the flow items make it through the value stream. That requires shortening the time it takes to complete flow items, start to finish. We’ll explain how to do that later on in this article.
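The idea of flow items and flow time can be made concrete with a minimal sketch. The four flow item types are taken from the text; the `FlowItem` dataclass, its field names, and the sample dates are all illustrative assumptions, not a real API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# The four flow items named in the text; everything else here is illustrative.
FLOW_ITEM_TYPES = {"feature", "defect", "risk", "debt"}

@dataclass
class FlowItem:
    item_type: str        # one of FLOW_ITEM_TYPES
    started: datetime     # when work was accepted into the value stream
    completed: datetime   # when value reached the customer

    @property
    def flow_time(self) -> timedelta:
        """End-to-end time from start to finish, including wait time."""
        return self.completed - self.started

items = [
    FlowItem("feature", datetime(2021, 3, 1), datetime(2021, 3, 15)),
    FlowItem("defect",  datetime(2021, 3, 3), datetime(2021, 3, 5)),
]
avg_days = sum(i.flow_time.days for i in items) / len(items)
print(f"Average flow time: {avg_days:.1f} days")  # → 8.0 days
```

Shortening this start-to-finish duration for each flow item is what increases the rate at which value reaches customers.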
How are software value streams different from assembly lines?
Lean and value stream thinking originated in manufacturing (Toyota car manufacturing, to be precise) but have become highly popularized in software delivery by the DevOps movement: With developers releasing code changes more frequently thanks to Agile, DevOps set its sights on getting those code changes running in production faster.
DevOps applies lean manufacturing principles to improve the deployment lead time, which “starts when an engineer checks a change in to version control and ends when that change is successfully running in production” (Kim, G., Humble, J., Debois, P., & Willis, J. (2016). The DevOps Handbook. IT Revolution, p. 8.)
DevOps practices teach organizations to automate the repeatable and mostly linear tasks from code commit to production, similar to a car assembly line.
Enterprises and agencies that have implemented DevOps have reaped dramatic benefits. They’ve gone from taking weeks or months to release a new version to deploying changes multiple times per day.
However, by focusing on deployment lead time instead of on end-to-end ‘time to value’, only part of the value stream is optimized, with the following consequence: The end-to-end lead time, from customer request (or market need) to production, remains unpredictable, unmeasured and dangerously long.
Even after Agile and DevOps, organizations find themselves unable to accelerate delivery fast enough to preempt or react rapidly to disruptions from startups, competition and market changes.
So, why not just extend the lean manufacturing practices upstream? Because they simply do not apply in the same way to product development.
Jim Womack, author of the seminal Lean book, The Machine that Changed the World, famously gave this advice to Harley Davidson: "Don't try to bring lean manufacturing upstream to product development. The application of Lean in product development and manufacturing are different."
Lean product development is a much less developed, researched and applied practice. It too aims to remove waste and reduce effort, but recognizes that the work during the design and implementation phase is very different from manufacturing:
At the core is the fact that you never produce the same work twice—each feature is different, presenting design, technical and economic choices at every step. Each output is unique, requiring the collaboration of a different set of practitioners to design, build and test it. And work goes through many iterations, which add value, not waste.
In contrast, the software work in the release pipeline strives to be predictable and mechanistic, with minimal output variation and no rework. (Kim, G., Humble, J., Debois, P., & Willis, J. (2016). The DevOps Handbook. IT Revolution, p. 9)
If organizations applied lean manufacturing principles in product development, it would stymie innovation and prevent them from creating the delightful and compelling experiences customers demand.
How then can a software delivery organization optimize the end-to-end value stream, if it’s made up of two very different value stream segments?
The Flow Framework® is a prescriptive approach to value stream management that recognizes this complexity and solves for it.
With the 2018 publication of his Amazon best-selling book Project to Product, Tasktop CEO and Founder Dr. Mik Kersten introduced the Flow Framework for IT leaders to guide and measure the journey to product and their organization’s ability to achieve innovation velocity. It enables organizations to do value stream management given the complex nature of software delivery knowledge work.
Measuring flow through software delivery value streams
“There is nothing so useless as doing efficiently that which should not be done at all.”
― Peter F. Drucker
It is nearly impossible to optimize a process when its mechanics are poorly understood.
At the very macro level, the entire software delivery process may appear to be linear and therefore repeatable (to the point that there have been previous efforts to create “software factories”).
But in fact, one level deeper, the iterative creative process of software design reveals itself to be a complex flow network of planning, design and engineering communication. Work moves back and forth between contributors as it progresses through each phase, morphing, changing, and converging in a highly creative process.
For example, during the release process additional testing is executed (security, static/dynamic analysis), and if issues are found during that stage, work will go back to Engineering. In another common example, while breaking down the feature in the implementation process, a glaring omission in user experience is found, and work will go back to feature design.
This network of activity takes place in many best-of-breed and specialized tools that have grown through the Agile and DevOps movements. Yet IT leadership understandably struggles to make sense of all that complicated activity—to see it clearly and to extract insight from it.
If the underlying infrastructure of developing and delivering flow items through the value stream is so complex, how can you truly measure how fast you’re capable of delivering critical business capabilities? How will you know where flow is slowing down so you can fix it?
The answer is Flow Metrics.
- Flow Metrics home in on business value: how much business value you are delivering today, and where you can invest your dollars and talent to deliver more value faster tomorrow.
- Flow Metrics are constructed from the combined work of all the contributing practitioners across the value stream.
- Flow Metrics create a clear set of end-to-end value stream metrics that can be shared by both IT and business leaders.
- Flow Metrics abstract away details like team structure, technical architecture, and tool implementations.
Measuring flow across heterogeneous toolchains
If you’re after end-to-end Flow Metrics, there is no choice but to mine the ground truth from the enterprise tool networks where the work is done and knit the data together.
In practical terms, getting actionable real-time visibility into end-to-end flow requires three things:
- The ability to capture data from any tool without interruption—isolated from third party changes and tool upgrades—and without destabilizing the tools’ daily operation.
- The ability to join and abstract the data from the individual tools into one integrated set of Flow Metrics.
- The ability to view the data sliced-and-diced through the business lens, so that software delivery performance can be correlated to business results.
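The second requirement, joining and abstracting data from individual tools, amounts to normalizing each tool's records into one common model. A minimal sketch follows; the field names mimic common tool schemas but are assumptions, and the mapping rules are invented for illustration.

```python
# Hypothetical raw records from two different tools (not real API payloads).
jira_issue = {"key": "PROD-42", "issuetype": "Story", "status": "In Progress"}
servicenow_ticket = {"number": "INC0007", "category": "incident", "state": "2"}

def normalize_jira(raw):
    """Map a planning-tool record into the common model (rules assumed)."""
    type_map = {"Story": "feature", "Bug": "defect"}
    return {"id": raw["key"],
            "flow_item": type_map.get(raw["issuetype"], "feature"),
            "active": raw["status"] != "Done"}

def normalize_servicenow(raw):
    """Map a service-desk record into the same common model."""
    return {"id": raw["number"],
            "flow_item": "defect" if raw["category"] == "incident" else "feature",
            "active": raw["state"] != "7"}   # '7' stands in for 'closed' here

unified = [normalize_jira(jira_issue), normalize_servicenow(servicenow_ticket)]
# Both records now share one schema and can feed a single set of Flow Metrics.
```

The point of the abstraction layer is that the Flow Metrics computed downstream never need to know which tool a record came from.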
Once armed with visibility into the flow of your product value streams, you can get to problem-solving. You can identify where flow slows down—where a constraint is impeding business value delivery—and start doing what’s needed to relieve the bottleneck. Then, you move on to the next bottleneck, and so on and so forth.
The trick is to get to Flow Metrics fast and in real time, because you don’t have two years to wait. One leading U.S. insurance company spent a year and $1 million to build their own Flow Time metric, only to find the investment rendered obsolete the moment they even slightly modified their toolchain.
Homegrown Flow Metrics can take years to produce, for the sheer effort of culling, normalizing, selecting and visualizing the right data. Add to that the cost of data replication and storage, the expertise one must develop on each tool’s APIs and data schemas, the care you must take to prevent bad queries from impacting tool performance, and the break/fix work following every tool upgrade.
Turnkey, purpose-built value stream management tools can provide Flow Metrics within days. They visualize current and historical flow for a product in terms of business value creation and protection, and they correlate Flow Metrics with business outcomes.
Improving flow across value stream networks
The three tenets of improving flow in software delivery value streams are Connect, Visualize, and Measure.
Value stream management aims to remove waste from value streams by identifying both necessary and unnecessary non-value-adding work.
- If the work doesn’t add value and is also unnecessary, it clearly should be cut.
- If it’s non-value-adding but necessary, it should be automated.
Connecting your value stream networks and automating the flow of work across them is among the lowest-hanging fruit for removing waste.
Based on an analysis of 308 value stream networks of the Global 2000, as well as hundreds of conversations with their IT leaders, Tasktop has identified three inhibitors to flow that can be easily alleviated by automating workflows and traceability across tool boundaries.
The three inhibitors to flow are:
- Lost productivity: Precious time is spent on non-value-adding work, like duplicate data entry between tools, manual handovers, status meetings, and endless emails back and forth with zero traceability. According to estimates, practitioners waste anywhere between 20 minutes and 2 hours a day on these inefficient processes.
- Burdensome traceability: Many of the world’s leading companies operate in highly regulated markets like financial services, automotive, healthcare, pharmaceuticals, and government. To remain in compliance, they must produce reports that trace every original requirement to its implementation (code, test, build). More often than not, they do so manually—usually using spreadsheets—in a process both expensive and error-prone.
- Frequent disruption by M&A, reorgs, and the introduction of new tools, processes or technologies: Upheavals like mergers, acquisitions, divestitures, and reorganizations can be very damaging to flow. Enterprises often find themselves incapable of rapidly absorbing additions to their technical ecosystem. Many transformations have been thrown off track by such changes, which disrupt any positive flow momentum the organization had going, making it challenging to meet business results.
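The traceability burden above is what automated cross-tool linking removes. As a toy sketch, requirement-to-artifact links (all IDs invented for illustration) can be folded into a report, and compliance gaps such as a requirement with no linked test fall out automatically instead of being hunted down in spreadsheets.

```python
# Invented requirement-to-artifact links, as a connected toolchain might record them.
links = [
    ("REQ-1", "commit", "a1b2c3"),
    ("REQ-1", "test", "TC-88"),
    ("REQ-2", "commit", "d4e5f6"),
]

# Fold the links into a per-requirement traceability report.
report = {}
for req, kind, ref in links:
    report.setdefault(req, {}).setdefault(kind, []).append(ref)

# Requirements with no linked test are the gaps an auditor would flag.
gaps = [req for req, kinds in report.items() if "test" not in kinds]
print(gaps)  # → ['REQ-2']
```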
According to the Flow Framework, value stream networks require an information backbone that connects the tools and orchestrates near real-time data synchronization between them. This backbone, referred to as the Integration Model, defines the routes that business value can flow through the value stream network.
The Integration Model connects the tools and routes the work as it progresses from team to team, discipline to discipline, specialized tool to specialized tool. All the while, it normalizes, relates and synchronizes the individual work items (‘artifacts’) across tool boundaries, eliminating silos and information bottlenecks and all the non-value adding manual work.
Furthermore, the Integration Model also provides the tool network with the elasticity to expand and contract, to evolve and change, to absorb newly acquired networks, experiment with the latest tools, and gradually wean off old ones.
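In spirit, an Integration Model is a declarative set of routes plus field mappings between tool repositories. The sketch below is a toy version under those assumptions; the tool names, statuses, and mapping are all invented, and a real implementation would synchronize bidirectionally and incrementally.

```python
# A toy Integration Model: one route and a status mapping between two
# hypothetical tool repositories (all names are illustrative).
routes = [("planning_tool", "dev_tool")]   # direction value flows
status_map = {"Ready": "To Do", "Building": "In Progress", "Shipped": "Done"}

planning = {"EPIC-1": {"title": "Checkout v2", "status": "Building"}}
dev = {}

def synchronize(source, target):
    """Mirror artifacts from source to target, translating statuses."""
    for artifact_id, fields in source.items():
        target[artifact_id] = {
            "title": fields["title"],
            "status": status_map.get(fields["status"], fields["status"]),
        }

synchronize(planning, dev)
print(dev["EPIC-1"]["status"])  # → In Progress
```

Because the routes and mappings are data rather than point-to-point code, adding, swapping, or retiring a tool means editing the model, which is the elasticity the text describes.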
“Given the maturity of the discipline of information visualization, we should be able to visualize every aspect of each activity that happens within a software value stream.” (Kersten (2018). Project to Product. IT Revolution, p. 155)
Every CIO and VP of Software Delivery hears their department heads and managers saying they simply have too much work and too few resources. And at the same time, the pressure from the business to deliver faster is relentless.
Visualization is the most powerful way to make those conversations and needs concrete. If you can present live value streams in a visual form, teams can rally around the dashboards and problem-solve together.
The data in tool repositories is the ground truth of software delivery, with each work item reporting its location and status in real time. Similar to live traffic apps like Google Maps and Waze, you should be able to visualize and analyze the flow of value as it traverses the value stream network of tools and practitioners.
Value stream management solutions are capable of collecting and compiling all those data points and drawing a map of how value flows through your teams. They can identify where work is slowing down and piling up, so you can find the fastest routes to production.
The ideal value stream visualization tool gathers individual work item statuses in real time and reframes them in business terms, so people can see how business value is flowing, not just work. It also communicates the overall picture of how value flows from inception to its final destination, helping organizations eliminate roadblocks, circumvent bottlenecks and reroute as necessary to get there fastest.
According to the Flow Framework, software delivery value streams must be visualized from the business’s perspective. Otherwise, IT and the business will never bridge the crippling divide between them and will remain on opposing sides of a fence.
The Product Model is crucial here. From each relevant tool, the Product Model carves out the subset of data pertaining to a specific product value stream and includes only that data in the value stream’s Flow Metrics. Meaning, it overlays the business lens on top of whatever technical, architectural or tool-related reality exists.
Traditional businesses have been implementing digital transformations for nearly two decades, in an effort to achieve parity with digital natives. But they’ve only partially delivered; at the current rate of disruption, half of S&P 500 companies will be replaced in the next ten years.
Naturally, with the stakes so high, there is zero tolerance for another failed transformation.
Software delivery organizations in any traditional business or government agency must become as productive as software startups and digital natives, if they are going to survive the next few years.
With literally hundreds of available IT metrics, selecting the ones that truly measure the impact of software delivery on the business can be hard. Organizations have commonly pinned their transformations on proxy metrics, i.e. measures of process, activity and operational efficiency that reflect siloed, local optimizations.
Unfortunately, proxy metrics are misleading, as they do not tie directly to business outcomes and cannot be relied upon to present an accurate picture. An IT team can be deploying a hundred times per day, but if their work is not connected to the needs of the business, the results will not materialize for the business either.
Measuring flow is the way to go. Consider Flow Metrics as the vital signs for software delivery. They are the leading indicators. If you can improve your flow, you are on the right track to meeting desired objectives. They provide a clear indication of whether your flow is healthy, trending positively, and can support the business results you’ve targeted.
A drill down into Flow Metrics reveals where work is slowing down, where the system bottleneck lies—where corrective action can remove the constraint and unblock flow for the entire value stream.
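A drill-down like that can be sketched by recording how long each work item spends in each stage and comparing stage averages; the stage with the longest average dwell time is the candidate bottleneck. The stage names and durations below are illustrative assumptions.

```python
# Days each work item spent per value stream stage (illustrative data).
stage_days = [
    {"design": 2, "build": 5, "test": 9,  "deploy": 1},
    {"design": 3, "build": 4, "test": 11, "deploy": 1},
]

stages = stage_days[0].keys()
avg = {s: sum(item[s] for item in stage_days) / len(stage_days) for s in stages}
bottleneck = max(avg, key=avg.get)
print(bottleneck, avg[bottleneck])  # → test 10.0
```

Relieving that one stage improves flow for the whole value stream, because work completed faster elsewhere still queues behind the constraint.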
As with the human body, diagnosing the ailment and healing a specific function that is underperforming will require detailed metrics on that specific discipline. Discipline-specific metrics can often be provided by the tool used to perform that function.
Proxy metrics that measure a specific silo are only meaningful if the silo itself is the bottleneck. But one should not mistake those metrics for an indication of the value stream’s overall health. Jeff Bezos, for example, urged his shareholders to resist proxies in decision making. Instead, you must find the metrics that directly correspond to business outcomes.