Scientific and technological revolutions have been catalyzed by breakthroughs in measurement. From Galileo’s telescope to tunneling electron microscopes, innovations in how we measure the world have been at the core of paradigm shifts. Measurements of the exchange of value and risk have produced the financial system that defines the world economy. With the underpinnings of our organizations and our economy shifting to intangible assets powered by software, how is it that we still do not have an established way of measuring value and risk in software delivery? How can we expect large-scale digital transformations to succeed if we can’t easily measure the effects of the organizational and technological changes that we are making?
My personal quest for meaningful software delivery metrics started by accident. In 2003, I left the software industry to work on my PhD in Computer Science. I had no intention of becoming an academic. Instead, I was on a quest to find the next 10x productivity increase in programming. So I asked Gail Murphy, my supervisor, two questions: How could I go about discovering this next 10x improvement? And how could I get my Master’s and PhD done in three years so that I could bring these ideas back to the industry?
Gail said that if I was serious about this, we would need not only to devise a set of experiments for evaluating a broad set of 10x ideas for boosting programmer productivity, but also to find a way of measuring the outcomes of those experiments. We eventually found that metric by measuring professional developers’ activity, and I was able to defend my PhD thesis with a statistically significant result for increasing developer productivity. The experience gave me an appreciation of the power of discovering a new way to measure. Since then, I have realized that the bigger problem was to find a way of measuring flow not just for a single developer’s work, but for an entire software delivery organization.
I started speaking to others on a similar quest. With the benefit of the data collected by the DORA State of DevOps surveys and summarized in Accelerate, a couple of years ago Nicole Forsgren and I were contemplating how to get the industry onto a better path of understanding how to measure software delivery. We documented our investigation in an article, “DevOps Metrics: Your biggest mistake might be collecting the wrong data.” It was clear to us that organizations were misunderstanding the differences between collecting survey data and system/tool data. Then I realized something even more worrying: this lack of understanding was leading companies undergoing large Agile and DevOps transformations to put in place completely misleading metrics.
The metrics were mixing up different types of data, collecting the data in the wrong way, and falling back to the simple tool-based proxy metrics that result in local optimizations of the value stream. The false conclusions drawn from these faulty approaches were proving to be a massive problem, and I made it the mission of my book Project to Product, and the Flow Framework™ within it, to provide enterprise IT organizations with a new and customer-centric set of Flow Metrics connected to business results.
The challenge for enterprises is not that they aren’t aware of the need for meaningful metrics, but that they have neither the right measurement framework nor a technically feasible approach to measure the right things. For example, in the middle of an Agile and DevOps transformation journey, one of our larger customers realized they needed to measure more than just the data in Jenkins and Jira, as that covered only a small slice of the time that work spent in their value streams. A seven-figure data lake/warehousing project brought in data from the four other tools used across their value streams. Before that project completed, both the methodology and the toolchain changed slightly, which was enough to render the entire warehousing initiative obsolete.
Because the data was not modeled and normalized at the time of its creation, the data warehouse turned into a garbage-in/garbage-out situation. The organization found itself where it had started, with in-tool proxy metrics. Once again they had no end-to-end view to tell them whether their new tool and process deployments were actually going to deliver customer value, or—worse—cause yet more bottlenecks and impediments. Ironically, this organization now has even less visibility into software delivery than they did when they were on the Rational suite from which they were migrating.
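The lesson about modeling and normalizing data at the time of its creation can be illustrated with a minimal sketch. The tool names, field names, and mappings below are illustrative assumptions for the sake of the example, not the actual schemas of any tool or Tasktop’s implementation: the point is simply that each tool’s records are translated into one common work-item model as they are ingested, so that downstream metrics survive tool and process changes.

```python
# Hypothetical sketch: normalize records from different tools into a common
# work-item model at ingestion time, instead of dumping raw tool data into a
# warehouse. All field names and type mappings here are illustrative only.

def normalize_jira(issue):
    """Map an (assumed) Jira-style record into the common model."""
    return {
        "id": issue["key"],
        "type": {"Story": "feature", "Bug": "defect"}.get(issue["issuetype"], "other"),
        "state": issue["status"].lower(),
    }

def normalize_azure(item):
    """Map an (assumed) Azure DevOps-style record into the common model."""
    return {
        "id": str(item["Id"]),
        "type": {"User Story": "feature", "Bug": "defect"}.get(item["WorkItemType"], "other"),
        "state": item["State"].lower(),
    }

# Once normalized, records from every tool share one schema, so metrics
# computed over them remain valid even if a source tool is swapped out.
records = (
    [normalize_jira(i) for i in [{"key": "PROJ-1", "issuetype": "Story", "status": "Done"}]]
    + [normalize_azure(i) for i in [{"Id": 42, "WorkItemType": "Bug", "State": "Active"}]]
)
```

If normalization happens only after the fact, in the warehouse, every change to a source tool’s schema silently corrupts the historical data; normalizing at creation time keeps the common model stable.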
The benefit of industry frameworks is that they can be bigger than any single vendor. This reality is why I made the Flow Framework™ free for commercial use (Creative Commons BY-ND licensed). Doing so gave the framework the benefits of open source that I have grown to appreciate, making it easy for everyone to adopt the concepts. However, the state of Agile and DevOps toolchains in the enterprise has made it clear to me that for many organizations, providing the framework alone was not enough.
Most enterprise organizations have neither the capacity nor the time to build an end-to-end set of Flow Metrics dashboards and product value stream modeling tools from scratch, and make them work across a broad set of ever-changing tools and practices. And yet the cost of delay and the risk of not having a meaningful and reliable way to measure a transformation can threaten an organization’s very survival. I’m a tool builder at heart, and I realized that a new kind of tool-based solution was needed. Not one that would replace any of the existing Agile and DevOps tools, but one that would connect them to measure flow at a level much closer to the business.
Some of our customers that were struggling with this problem started to point out that Tasktop was actually sitting on the solution. Our integration technology, Tasktop Hub, already provides a full-fidelity way to connect to nearly all Agile, DevOps, Testing, Project, and Requirements Management tools from ideation to operation. Doing so at scale, for tens of thousands of IT staff across a wide variety of tools, processes, and practices, required Tasktop to design and develop Model-Based Integration, which provides a value-stream-centric abstraction layer over the data and schemas in the tool repositories. With these building blocks, and a lean prescriptive framework for measurement, we could finally deliver on the most critical aspect of my vision for the industry: reliable and real-time metrics for the flow of value in software delivery. That is the story of Tasktop Viz, our new product aimed at providing business-level visibility into an organization’s software delivery value streams via the Flow Framework™.
Tasktop Viz provides organizations with the first business-centric way of measuring value in a software product portfolio. It does this by enabling organizations to easily model that portfolio around value streams that are mapped to existing tools and ways of working. From that data, Tasktop Viz makes it easy to immediately start tracking Flow Metrics and identifying bottlenecks across even the most complex value streams. Once the bottlenecks and impediments to flow, such as a disconnect between Dev and Ops, are visible in those metrics, Tasktop Integration Hub can be used to automate the seamless flow of information across the tool silos present in any large IT organization. To business stakeholders, this means less Agile and DevOps tech-jargon, and more conversations around how software impacts revenue, cost of operation, customer retention, susceptibility to security breaches and other risks, and the happiness and productivity of the talent driving innovation.
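To make the idea of Flow Metrics concrete, here is a minimal sketch of how a few of them could be computed from a set of completed work items. The data records and field names are hypothetical assumptions for illustration; this is not Tasktop Viz’s data model or implementation, only an example of the kind of calculation the Flow Framework™ describes (Flow Velocity, Flow Time, and Flow Distribution over flow items such as features and defects).

```python
from collections import Counter
from datetime import datetime

# Hypothetical work item records completed in a measurement window.
# "type" is the flow item type; "start"/"done" are when active work
# began and finished. All values are invented for illustration.
work_items = [
    {"type": "feature", "start": "2019-10-01", "done": "2019-10-08"},
    {"type": "defect",  "start": "2019-10-03", "done": "2019-10-05"},
    {"type": "feature", "start": "2019-10-02", "done": "2019-10-12"},
]

def flow_time_days(item):
    """Elapsed days from when work started until it completed."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(item["done"], fmt)
            - datetime.strptime(item["start"], fmt)).days

# Flow Velocity: number of flow items completed in the window.
velocity = len(work_items)

# Average Flow Time across the completed items.
avg_flow_time = sum(flow_time_days(i) for i in work_items) / velocity

# Flow Distribution: share of completed work by flow item type.
counts = Counter(i["type"] for i in work_items)
distribution = {t: n / velocity for t, n in counts.items()}
```

In practice the hard part is not the arithmetic but getting trustworthy, normalized start and completion events out of many different tools, which is exactly the gap the product aims to close.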
Over the past few months, I have been thrilled to see the impact that Viz is having on organizations that are using it to implement the Flow Framework™. It has been amazing to see organizations deploy this data-driven approach to continuous improvement, and to replace project plans with Flow Metrics dashboards. These significant customer results have been possible because we have been building and releasing Tasktop Viz in an appropriately Agile fashion. The Early Access program launched in April this year, followed in July by large-scale production deployments to select visionary customers to accelerate our feedback loop. And in true Tasktop fashion, a number of early adopters were Fortune 100 organizations. The next step in this journey is a limited release to a community near and dear to us: the DevOps Enterprise Summit conference attendees. That will be followed by general availability in early 2020, at which point we will reveal much more about the features and overall solution.
The 308 customer value streams that customers connected with Tasktop Integration Hub were the empirical evidence that paved the path for the Project to Product book. But that evidence was limited to an understanding of how tools were connected. With Tasktop Viz, both we and our customers are getting an unprecedented level of insight and diagnostics into how large-scale product value streams operate and how to optimize their flow. I could not be more excited about the learnings that are to come, and want to thank everyone who has been with us on this great journey!
If you would like to learn more about Tasktop Viz or the Flow Framework™ Starter program, please don’t hesitate to get in touch. And if you’re at DevOps Enterprise Summit in Las Vegas today or tomorrow, visit the Tasktop stand (403) to see Viz in action.
Learn more about Tasktop Viz and Flow Metrics.