
What is Value Stream Management in software development and delivery?

Thu, 04/19/2018 - 14:48

“If you can’t describe what you are doing as a value stream, you don’t know what you’re doing” – Karen Martin and Mike Osterling, Value Stream Mapping

You’ve heard us talk a lot recently about value streams in software delivery. How leading IT organizations are no longer concentrating on how fast they deliver software, but how much business value they can deliver at speed. And how every customer product, service or application has its own value stream.

Software delivery as a value stream

This significant shift in mindset means CIOs are looking more closely at how value flows across the software delivery process. They’re looking for a holistic way to connect and measure all end-to-end activities undertaken for a specific product or service in order to provide great customer experiences.

CIOs under siege

With CIOs under pressure from the business to create greater value for customers through innovation – while eliminating delays, improving quality, and reducing cost, labor and employee frustration – they are focusing on how to obtain end-to-end visibility into how this value is created, so they can measure and optimize its flow. No small order.

So it’s understandable that CIOs are feeling the heat, and frustrated too. They’ve followed the software delivery playbook to a T; they’ve been to the tech conferences, invested millions in Agile and DevOps, brought in the specialist tooling and people – so why are they still not seeing the results they were promised?

The limitation of Agile and DevOps

One of the main reasons that IT transformations fail is that Agile and DevOps initiatives struggle to scale. In terms of optimizing the “build and deploy” stage, Agile and DevOps have been a success, enabling priority features to be built and released into production faster than ever. In that sense, enterprises and agencies are reaping dramatic benefits; they’ve gone from taking weeks or months to release a new version to deploying changes multiple times per day.

However, because the methodologies only focus on certain areas of the value stream, the productivity benefits are confined to those stages, as seen in the diagram below:

The productivity benefits of Agile and DevOps do not impact total end-to-end Time to value (TtV) without automatically flowing and tracing work from ideation to continuous operation.

What about everything that happens in the “ideate” and “operate” stages, before and after a product has been built and deployed? The benefits of Agile and DevOps hit a wall, and the full end-to-end process is never fully optimized. And when the volume and velocity of product requests inevitably begin to ramp up, so too do the issues and waste that undermine time to value.

Download this white paper to better understand why your Agile and DevOps transformations are failing at scale

When trying to trace the flow of work across the end-to-end process, CIOs are quickly identifying even more issues – chiefly, they can’t actually see how work (business value) flows across key stages in their value stream because of the disconnect between the tools and teams that plan, build and deliver software. They have no end-to-end visibility, traceability or governance over the process.

The perils of a disconnected value stream

With a disconnected value stream, how do you confirm how many features you delivered last year? Or how much of your resource capacity is going towards new business value vs. technical debt and quality issues? These are the kinds of questions a CEO will ask, and if CIOs can’t supply the answers, their days may very well be numbered.

Crucially, none of this vital information is available in one tool; rather, it is stored in pieces across multiple systems in the value stream. These pieces need to be put together to create a single view – one source of truth for a product’s development. This traceability requirement is particularly crucial for heavily regulated industries such as government, finance and healthcare, but it is also pertinent to commercial businesses aiming to deliver a high-quality product.

Furthermore, without this end-to-end view, it’s almost impossible to measure the all-important flow time – the key measure of delivery speed. Flow time represents the time it takes to deliver a new feature or product from the first customer request through to completion (i.e., delivering value to the end user).
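
To make the metric concrete, here is a minimal sketch in Python of the calculation (the dates and field names are invented for illustration):

from datetime import datetime

def flow_time_days(first_request: datetime, completed: datetime) -> float:
    """Flow time: first customer request through to delivered value."""
    return (completed - first_request).total_seconds() / 86400

# A feature requested on March 1 and released to end users on April 12
requested = datetime(2018, 3, 1)
released = datetime(2018, 4, 12)
print(f"Flow time: {flow_time_days(requested, released):.0f} days")  # 42 days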

Abandoning a “hope and see” mentality, forward-thinking CIOs are taking direct action to connect the process and make it visible through Value Stream Management.

Value Stream Management: The next major milestone in software delivery


Tasktop enables Value Stream Management to help you connect, visualize, measure and optimize your software delivery process to drive business value.

A Value Stream Management solution connects the network of best-of-breed tools and teams for planning, building and delivering software at an enterprise-level.

By doing this, CIOs can automate the flow of product-critical information (e.g. artifacts such as Features, Epics, Stories, Defects etc.) and other associated data (comments, attachments etc.) across the value stream. This capability provides all stakeholders with a comprehensive and accurate view of the process from start to finish.
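
As a rough illustration of what “flowing” an artifact means in practice – the tools, field names and mapping below are hypothetical – the core of the job is translating an artifact into the target tool’s vocabulary while preserving its identity:

from dataclasses import dataclass, field

@dataclass
class Artifact:
    """A work item (Feature, Epic, Story, Defect) and its associated data."""
    source_id: str   # stable ID in the originating tool
    summary: str
    status: str
    comments: list = field(default_factory=list)
    attachments: list = field(default_factory=list)

def flow_to_target(artifact: Artifact, field_map: dict) -> dict:
    """Map source fields onto the target tool's field names, carrying the
    source ID along so the two fragments of the artifact stay linked."""
    payload = {target: getattr(artifact, source)
               for source, target in field_map.items()}
    payload["external_id"] = artifact.source_id  # the traceability link
    return payload

# A planning tool's "Feature" becomes a delivery tool's "story"
feature = Artifact("FEAT-101", "Add single sign-on", "In Progress")
print(flow_to_target(feature, {"summary": "title", "status": "state"}))
# {'title': 'Add single sign-on', 'state': 'In Progress', 'external_id': 'FEAT-101'}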

End-to-end automation addresses two core challenges in managing value streams:

  1. Software delivery work is invisible knowledge work. There are no physical materials to observe as they move through the value stream. It’s hard to comprehend something you cannot see, and even harder to manage it.
  2. Unless fully automated, transitions between work centers are informal and untraceable. Handoffs take place over email, phone, chat, in spreadsheets or face-to-face meetings. The value stream therefore exists, but only implicitly. It is not tangible – and therefore incomprehensible.

Tasktop – the only Value Stream Management solution on the market

Tasktop is helping leading organizations and agencies – including nearly half of the Fortune 100 – across all major industries to connect, visualize and measure their software delivery at scale to deliver real tangible business value.

Not sure where to begin? Don’t worry – connecting an entire value stream can at first seem like an overwhelming task. But most leading organizations are connecting their value streams in an incremental way.

Most organizations start by implementing one or two ‘integration patterns’ that allow them to connect part of their value stream. Over time they add more and more, with the ultimate goal of a fully integrated value stream.

You can learn more about integration patterns in our blog Common integration patterns in enterprise software delivery.

Speak to us today about a free one-hour consultation with one of our value stream experts to begin visualizing the value streams that exist in your business.


Like Waze for your software development and delivery?

Tue, 04/17/2018 - 13:08

Of all of the things I’ve done while at Tasktop, working on webinars has to be my favorite. Every month, I get the chance to work with intelligent people with unique perspectives on software development and delivery to educate and inspire viewers to improve the way their organizations deliver value to customers.

We like to pick topics where we can provide some practical advice and clear up the confusion that often arises when lots of different people start using the same term to mean different things. That’s why our next couple of webinars are going to focus on value streams.

We’ve invited Forrester Senior Analyst, Christopher Condo, to be our guest speaker to help us define Value Stream Management and answer some of the common questions, including:

  • What is Value Stream Management and why do I need it?
  • We’re already doing Agile and DevOps. Do we need value streams too?
  • What does it actually take to be able to manage your value stream?

Forrester has been doing some exciting research on how Value Stream Management can tie Agile and DevOps teams to the business and provide end-to-end visibility. We’ll discuss how value stream management allows you to measure and identify ways to further optimize your software delivery.

Oh, and the best thing about webinars? They provide me with an opportunity to try out analogies to make the concepts more fun and relatable. Sometimes, I may go too far.

So, if you’d like to hear how Value Stream Management is like Waze for your software development and delivery, join us next Thursday for our webinar featuring Forrester.


The 2018 Text Editor and IDE Playoffs

Thu, 04/12/2018 - 07:35

It’s playoff season in the world of sports. The NHL kicked off the journey to the Stanley Cup last night and the NBA had an exciting finish to the regular season cementing the matches set to start this Saturday. Not to mention how epic this year’s March Madness was.

There are many tools we use every day in software development. With all this heated excitement, drama and glory, we were inspired to conduct a little playoff bracket of our own for this episode of Tasktalks. Kevin and I are joined by Jaxsun McCarthy Huggan, another of our software developers, to match up 16 text editors and IDEs against one another. We picked everything from the best of breed and the tried and true to the ridiculous but fun.

The bracket had some clear favourites and some underdogs; however, in typical sports fashion, there was definite drama in many of the matchups. We were honestly surprised by some of the upsets, and our hearts were won over by a couple of Cinderella stories. That said, at the end of the day, we went through three rounds of grueling debates to crown the top product. At least for one episode, that is.

Format:
  • 16 products in the text editor/IDE space
  • Matchups chosen arbitrarily
  • Each of us will have a chance to provide thoughts arguing our case as to which product moves on
  • Product with the majority of votes in each matchup moves on


How do I measure the productivity of my software development team?

Tue, 04/10/2018 - 10:11

The eternal question for organizations worldwide – how do you measure the productivity of your software development team?

There have been many attempts to answer this question, yet a solid measure continues to elude the industry.

For instance, counting output such as the number of lines of code produced is insufficient as there’s little point in counting lines that may be defective.

Quantifying input isn’t easy, either – do you count the number of individuals? The number of hours spent coding? The total hours spent working? What exactly is productivity in software development?

First, we need to establish how developers themselves perceive productivity. If we can determine what factors lead to perceptions of productivity, we can then look to recreate those factors and help developers feel more productive more often. And if a developer feels more productive, they’re more likely to deliver better work faster.

To better understand how developers perceive productivity, researchers observed professional software developers from international development companies of varying sizes for four hours each. The findings – revealed in the white paper Understanding software development productivity from the ground up – identify the key factors that make developers feel productive, and provide compelling insight into how to eliminate the activities and tasks that drain developer productivity.

Speak to us today to learn more about how you can improve both the productivity of your development teams and the productivity of all other specialist teams that help you to plan, build, test and deliver software at scale. By focusing on end-to-end productivity, you can optimize your time to value to accelerate the speed and quality of your software products.


This blog isn’t “technically done”

Wed, 04/04/2018 - 14:38

There are likely as many definitions of “Done” as there are companies out there. You have to find the one that fits. And then stick with it.

This blog post isn’t “technically done”. It is done.

It’s not “done except for the review”.

It’s not “done, but I need to update a couple things”.

It’s not “done, but needs approval”. It is done.

You know how you know it’s done? Because you’re reading it. Because it’s been published and I’ve moved on to other work. Because my “write and post a blog” task has been marked off.

This post isn’t still being thought about, worked on, or anything else. It’s complete. Because for better or worse, I finished it, wrapped it up, put a bow on it and clicked “publish”.

Sure, I may need to go back at some later date and revise it. But that would be “new” work. This work, the work of writing and posting this blog, is unequivocally “Done”.

When it’s so easy to say when a blog post is done, why is our industry so wishy-washy when we talk about features, stories, and defects? You’d be amazed at the different ways people say a feature is done.

“It’s technically done, there’s just another review.”

“It’s done, but hasn’t been merged to the master branch.”

“It’s done, but hasn’t been merged to all the branches.”

“It’s done, but doesn’t have Product Owner sign-off.”

None of those are “Done”. They’re in different states of “In Progress”. Some are closer to “Done” than others. That’s fine. That’s why we often have fairly granular statuses. We should be adamant about calling it like it is.

So how do we decide when it’s “Done”? There’s a decent amount to unpack in that sentence. First, you have to define the “it”, and then you have to define “Done”.

Let’s start with the “it”. In simple terms, the “it” is whatever thing you’re working on, such as a feature, a story, a defect, tech debt etc. We usually track these in a tool such as Jira, Rally, Targetprocess, etc. The “it” is simply a work item in your tool. Really, the work item is just a representation of the real thing (the software product), but we need to get concrete here. So for the sake of this post, let’s call the “it” the artifact that exists in your tool.

Typically, the artifact will exist in a number of different states such as “New”; “In Progress”; “Done”. That’s an incredibly simple one (at Tasktop, however, we have upwards of eight states as we want to be very specific in our Feature Delivery process. That’s us though. You may be different!).

Each of these states needs to have really specific definitions around what they mean. This is where the “Definition of Done” comes into play.

You have to decide that for your organization. Write it down. Make everyone read and agree to it. You don’t move your story to “Done”, until it meets your definition. Your feature isn’t “Done” until, well, it’s “Done”.

Whether that means only the code has been committed, or whether that means a successful software build has been produced that includes the feature, or whether that means the Product Owner has reviewed and approved the functionality, it’s not “Done” until it’s “Done” by your definition.
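
A minimal sketch of that discipline (the states and checklist below are invented examples, not a prescription): encode your Definition of Done explicitly, and refuse the transition until every criterion is met.

from enum import Enum

class State(Enum):
    NEW = "New"
    IN_PROGRESS = "In Progress"
    DONE = "Done"

# Your organization's Definition of Done -- example criteria only
DEFINITION_OF_DONE = ("code_committed", "build_passed", "po_approved")

def move_to_done(work_item: dict) -> State:
    """Allow the transition only when every criterion is satisfied."""
    unmet = [c for c in DEFINITION_OF_DONE if not work_item.get(c)]
    if unmet:
        raise ValueError(f"Still 'In Progress', not 'Done': missing {unmet}")
    return State.DONE

story = {"code_committed": True, "build_passed": True, "po_approved": False}
# move_to_done(story) raises: missing ['po_approved'] -- it isn't "Done" yet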

You may even go so far as to decide that it’s not “Done” until the user docs have been updated. Or it may not be “Done” until Marketing has made a video for the feature. Or maybe it’s not “Done” until it’s been released to the public.

The point is that there are likely as many definitions of “Done” as there are companies out there. You have to find the one that fits. And then stick with it.

Now for the advanced stuff. Sometimes the “it” we were talking about exists in many tools at once. Or it appears in different forms in different tools. These multiple fragments of an artifact make traceability and product validation extremely challenging, as my colleague Brian Ashcraft expands on in his recent article in Dzone.

Here at Tasktop, our field team sends in feature requests via one tool. These requests then flow to the Product team’s tool to be turned into features, which are in turn sent to the Engineering team in yet another tool.

We automate this flow of work from tool to tool across the software delivery value stream, but I know there are some of you out there that do the exact same thing, but do it manually.

We’ve now entered the world of value streams.

Value streams are the manifestation of how information and work flow through your organization between teams and tools. They broaden the definitions of the “it” and “Done” that we spoke of earlier.

Value streams are a bit outside the scope of this post. This post is about knowing precisely when you as an organization know when a work item is “Done”. It’s about being able to report “I am done with this and I’m ready to move to that.”

Now that I’ve written the post, had it reviewed and posted it, I can say with certainty…

I’m “Done” – can you?

Want to automatically flow information across your value stream? Request a dynamic personalized demo today.


How do I find and eliminate DevOps bottlenecks?

Tue, 04/03/2018 - 09:39

DevOps, like Agile, has transformed enterprise software delivery. Thanks to sprints, prioritization, CI/CD and release automation, organizations are building and deploying software products faster than ever. That pesky bottleneck between code commit and deploy has been all but eliminated, which should ensure better time to value for customers.

Yet if your flow time – i.e. end-to-end lead time – is still too long, unpredictable and unmeasurable, it’s likely you’ve only shifted the bottleneck further upstream. Sure, automation has sped up handoffs and communication between developers and operations, but what about everything else that happens in the process?

What about all the other manual processes that take place before and after a piece of code is written? If there are still manual handoffs at key stages of the process, then your overall workflow is still being impeded by bottlenecks outside of the DevOps stage.

Download this short e-book to learn how to target DevOps bottlenecks via connected lifecycle data

As Dominica DeGrandis, our Director of Digital Transformation, explains in her latest article for TechBeacon, you can only identify and remove these bottlenecks if you can see them. A LOT happens before ‘Dev’ and after ‘Ops’. A lot of creative thinking and activity ensures the right product is built, maintained and delivering value to the end user. And unless you can trace and automate the flow of work from ideation to production, you won’t be able to optimize the process. You need to collect and consolidate all data that pertains to planning, building and delivery of the product.

So how do you avoid bottlenecks and accelerate your DevOps (and other IT) transformations?  First, you need to ask some important questions:

  • Are you measuring the right things?
  • Do you understand how value flows across the process?
  • Can you easily obtain real-time metrics across the process?
  • Are you able to produce accurate traceability and other performance reports?

If the answer is “no” or you’re not sure, then it’s likely your software delivery value stream is still a mysterious black box of activity, and not optimized as a result. With no visibility into the end-to-end process, how do you know where to look for bottlenecks? How do you know where the opportunities are to create more value?

The good news is that you can “reveal” and optimize the software delivery process by connecting and automating the flow of work between teams and tools via value stream integration. 

For a deeper look into how to find and remove bottlenecks, check out Dominica’s piece Break through those DevOps bottlenecks.

Download this short e-book to learn why your Agile and DevOps initiatives are struggling to scale

For a more dynamic discussion, request a personalized demo of your software delivery value stream. We can help you connect your value stream network, spot bottlenecks, and dramatically improve how fast and well you deliver innovative software products.


What is Traceability in Enterprise Software Delivery?

Thu, 03/29/2018 - 08:56

When people talk about traceability in enterprise software development and delivery, they’re generally referring to “requirements traceability”. Requirements traceability is about tracing the lifecycle of a requirement (i.e., what the end user expects from a new or modified product) and its related design elements (tests, commits, builds, etc.) as it moves downstream towards deployment following a customer request. It’s about ensuring the right thing is built the right way, and laying breadcrumbs so that the process can be accurately analyzed, improved and optimized. It’s about product validation.

How does traceability help IT organizations?

Traceability underpins three critical business management processes:

  • Quality Management: enabling organizations to hit quality targets/meet customer expectations
  • Change Management: tracking changes to product during development (before, during and after)
  • Risk Management: tracking and verifying vulnerabilities to product integrity

What type of organization needs traceability?

For organizations in heavily-regulated industries such as finance, healthcare, insurance, and federal government, traceability is critical. All work simply must adhere to strict regulation and policy. Compliance is everything. Requirements must undergo comprehensive testing because outages and breaches carry serious implications (the inability to find a patient’s medical file, for instance, can be fatal).

And even for companies outside those sectors, a missed critical test carries a serious danger: deploying a wrong or malfunctioning product that undermines a quality delivery. Customers are left unsatisfied, employees frustrated, and the company’s reputation takes a hit.

How are organizations approaching traceability?

Traditionally, organizations have used a Requirements Management tool (such as Doors or Jama) and a Requirements Traceability Matrix (RTM) to track changes to a requirement during production, from ideation to completion.

This approach, however, is inherently flawed: a spreadsheet or even a requirements traceability matrix (RTM) lacks the deep sophistication required to document and link all these moving parts from end-to-end. Such an approach is too narrow and static to capture the sheer volume and velocity of work – especially when you consider the spiraling network of specialists, tools, and artifacts involved in creating a single iteration.

It’s important to remember there isn’t just one version of a requirement. Multiple tools for planning, building and delivering software each store their own fragment of a requirement so that the relevant specialist (developer, tester etc.) can work on it in their own tool. Yet there’s no simple or practical way to cross-reference and verify this work.

Using a spreadsheet, therefore, provides limited traceability: it’s simply not dynamic or fast enough to keep track of all the fragments of requirements, the hierarchical relationships, dependencies, and other linked artifacts (such as test cases) that travel through multiple systems. It doesn’t trace the entire lifecycle of all elements in a product iteration, including all associated changes that validate that the right product was deployed.
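
To see why, consider what a static RTM really is – essentially a lookup table from each requirement to its linked artifacts (the IDs below are invented for illustration):

# A static RTM: requirement -> linked artifacts, captured at one moment.
rtm = {
    "REQ-1": {"tests": ["TC-10", "TC-11"], "commits": ["a1f3c"]},
    "REQ-2": {"tests": [], "commits": []},  # nothing linked yet
}

# It can answer "what has no test coverage?" at that moment...
untested = [req for req, links in rtm.items() if not links["tests"]]
print(untested)  # ['REQ-2']
# ...but it has no way to notice when a fragment of REQ-2 changes
# in another tool, which is exactly the gap described above.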

What’s the solution?

Any traceability solution must link all teams and tools to provide a single source of truth and absolute transparency between systems. For that, there needs to be data integration across tools both up- and downstream – from the teams that plan, design and create software, to the teams that build, deploy and maintain it. That means automating the flow of all requirements and associated artifacts (and any modifications and changes) across all tools in real time. Not only does this data improve specialists’ productivity and the quality of their work, but management and CIOs can create accurate reports and dashboards to monitor and optimize performance.


Value Streams in Enterprise Software Delivery

Tue, 03/27/2018 - 14:20

First it was “Waterfall”. Then it was “Agile” and “DevOps”. Now the concept du jour in the enterprise software delivery space is “value stream” – and for good reason.

Many enterprises in the Fortune 100 are no longer focusing purely on how to build software, but why they’re building software. A seismic shift is taking place – it’s not just about how fast you can deliver, but how much value you can deliver at speed. So by focusing on the why, you can begin optimizing the how.

But what is a value stream, and how does it apply to software delivery? The concept originates with Toyota, the Japanese car manufacturer, which developed Value Stream Mapping to introduce lean principles to its assembly line and reduce end-to-end lead time.

“A value stream is a sequence of activities that an organization undertakes to deliver on a customer request” – Value Stream Mapping: How to visualize work and align leadership for organizational transformation

But as Tasktop’s Director of Digital Transformation, Dominica DeGrandis, highlights in her new article Rowing in the same direction: use value streams to align work, your organization needs to understand its own definition of value first. Is it investment ROI? Shareholder profits? Customer experience? Adhering to company vision and ethos? Securing venture capital dollars?

To that end, we propose a new definition:

“A value stream encompasses all activities undertaken from beginning to end for a specific product or service in order to provide business value.”

Check out Dominica’s piece for AgileConnection for a better understanding of what value streams are, why they matter, how to define your organization’s value, and how best to exploit them.

Only once this value has been defined can an organization begin to provide visibility into how business value is created and how to optimize its flow across the business – including its software delivery teams. Value stream integration is critical for flowing business value from ideation to production, and a core component for scaling Agile and DevOps transformations.

Download this e-book to overcome your enterprise-scale Agile and DevOps issues

Want to know more? Contact us today for a deeper overview of how the world’s most successful and impactful organizations are integrating their value streams to optimize their software delivery and digital transformations.



Salesforce acquires Mulesoft to put integration firmly on the map

Thu, 03/22/2018 - 08:00

At Tasktop, we’re excited by the news that Salesforce has acquired Mulesoft, in a deal estimated to be worth up to $6.5 billion. The acquisition is another significant development in driving IT efficiency across enterprises, further emphasizing the importance of data integration between systems.

Salesforce is a leading CRM solution that enables organizations to capture and nurture leads, track accounts and customer requests, identify business opportunities and issues, and more. The system serves as the prime link between business and customer.

Mulesoft, meanwhile, is a leading solution for building application networks that connect enterprise apps, data, and devices, across any cloud or on-premise system. This deal, explains Greg Schott, Mulesoft Chairman and CEO, enables the two companies to “accelerate our customers’ digital transformations, enabling them to unlock data across application or endpoint.”

The goal of “unlocking data” to help create a consistent, accurate and real-time view of a customer’s application-stack performance is music to our ears. As a market-leader in Value Stream Integration, we are firm believers in creating that all-important ‘single source of truth’ through integration, helping customers to better understand how software is being built to deliver greater value to the business.

“Salesforce’s acquisition of Mulesoft puts integration firmly on the map,” explains Nicole Bryan, VP of Product, Tasktop. “The message is clear; integration matters – matters big time – and is clearly at the heart of any business function.”

Yet there’s more to integration than just connecting systems. “What matters most with integration is the why; organizations need to ask themselves what their business case is for integrating,” continues Bryan. “To truly drive efficiencies, you need to look at how data is flowing across the whole of IT – including the software delivery process.”

Bryan adds: “While stronger data fidelity between Salesforce and Mulesoft will improve the quality of customer data in relation to their products, this data won’t be as effective if the value is lost during the development of an application. Integration matters at every point, from customer-facing systems to the process that creates those systems.”

For instance, if a customer request for a product feature is logged in Salesforce but not automatically flowed to the next stage (say, to a product owner’s tool such as Targetprocess), then the quality of the data begins to depreciate. It’s therefore crucial that enterprises think about how value flows across their organization and how integration can optimize that flow at critical points, so customer requests don’t get held up in an email somewhere or corrupted by human error.

Check out our explainer blog on how we can help you optimize customer success by integrating CRM tools such as Salesforce with all the other tools in your software delivery value stream.

Tasktop connects the network of best-of-breed tools (including Salesforce), roles, teams and processes used for planning, building and delivering software at an enterprise-level. The backbone for the most impactful and largest Agile and DevOps transformations worldwide, Tasktop enables organizations to connect their software delivery value stream for end-to-end automation, visibility, traceability and control over the whole process.

With the ability to support hundreds of projects, tens of thousands of users and millions of artifacts, Tasktop automates the flow of product-critical information between tools to optimize productivity, collaboration and adaptability in an unpredictable and fast-paced digital world.

Request a personalized demo to see how we can help you with your integration strategy to drive IT efficiencies and support your digital transformation.


Designing an API that works with Tasktop: The object model

Tue, 03/20/2018 - 08:28

Designing a REST API that works well with Tasktop is challenging. There are a lot of subtle things that can go wrong during the API design phase that can make it difficult for Tasktop to flow data to and from an ALM system.

A first-class entity is an object that the API allows you to directly manipulate. Let’s consider the case where your ALM system has two first-class entities: Tickets and Users. Tickets are the items that represent a unit of work that needs to be done. Users are the people involved in doing the work such as software developers, managers, and QA testers.

First-class entities

Every first-class entity in your system needs to have a stable and unique identifier. For a work item such as a Ticket, the ID could be a number or a UUID.  For a User, the ID is often the username or the email address.

The API documentation should be clear about what the first-class entities are, how they are referenced, and what data they carry with them.

The ID of an entity allows us to uniquely identify it, and it must be guaranteed not to change during the lifetime of the entity. The summary is often a human-readable description of that particular entity. Created and modified dates are useful for querying issues that pertain to a particular time period. The ‘version’ property allows Tasktop to tell whether the entity has changed since the last time it was seen.

Work Items

A Ticket in your system may be represented like this:

{
  "id": 12401,
  "summary": "Update the website",
  "description": "We <b>need</b> to update the website",
  "assignee": "rsantinon",
  "status": "OPEN",
  "created": "2017-04-28T19:10:22.0070000Z",
  "modified": "2018-02-12T11:15:22.0070000Z",
  "version": 1
}

Tasktop also requires an abstract definition of the object model, which specifies the data type and constraints of each field. For example, the API call:

GET      /api/tickets/metadata

Might return a JSON object like this:

{
  "fields": {
    "id":          {"type": "number",   "readonly": true},
    "summary":     {"type": "text",     "readonly": false},
    "description": {"type": "html",     "readonly": false},
    "assignee":    {"type": "user",     "readonly": false},
    "status":      {"type": "select",   "readonly": false},
    "created":     {"type": "datetime", "readonly": true},
    "modified":    {"type": "datetime", "readonly": true},
    "version":     {"type": "number",   "readonly": true}
  }
}

This field schema states what the data type is and whether or not the field is read-only. In this example, users of the API are allowed to change the summary, description, assignee and the status but not the other fields. Tasktop will use this data to determine that, for example, it makes sense to synchronize another user into the ‘assignee’ field, but it doesn’t make sense to synchronize a date into the ‘created’ field.
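
A sketch of the kind of check this metadata enables (simplified; a real integration weighs many more rules than these):

schema = {
    "assignee": {"type": "user", "readonly": False},
    "created":  {"type": "datetime", "readonly": True},
}

def can_write(field_name: str, value_type: str) -> bool:
    """A field is writable only if it exists, is not read-only,
    and the incoming value matches its declared type."""
    spec = schema.get(field_name)
    return spec is not None and not spec["readonly"] and spec["type"] == value_type

print(can_write("assignee", "user"))     # True: synchronize the user
print(can_write("created", "datetime"))  # False: read-only, never written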

In this example, the only constraints on a field are whether or not it is read-only, but in a real ALM system there is a variety of constraint types:

  • Requiredness: whether or not a field is required to be set upon creating a ticket. Often the summary and status fields are required, but a description is not.
  • Data ranges: date fields may only accept dates within a certain range and number fields often only accept numbers represented by a certain number of digits
  • Character restrictions: which special characters are allowed in which fields.
  • Relative date restrictions: If a ticket has both ‘planned start’ and ‘planned end’ date fields, then it makes sense that the end must be a later date than the start.

It is important to provide field constraint information either through the API (as in the metadata call above) or in the documentation. Developers integrating with your product don’t want to have to use trial and error to determine which values can go into each field. If your ALM system allows users to create custom fields, then it is necessary to provide the custom field metadata through the API so that Tasktop can integrate with it.
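
For illustration, here is how a few of those constraints might be enforced when a ticket is created (the rules below are invented examples, not any particular system’s):

from datetime import date

def validate_ticket(ticket: dict) -> list:
    """Return a list of constraint violations; an empty list means valid."""
    errors = []
    # Requiredness: summary and status must be set on creation
    for required in ("summary", "status"):
        if not ticket.get(required):
            errors.append(f"'{required}' is required")
    # Relative dates: planned end must be later than planned start
    start, end = ticket.get("planned_start"), ticket.get("planned_end")
    if start and end and end <= start:
        errors.append("'planned_end' must be later than 'planned_start'")
    return errors

ticket = {"summary": "Update the website", "status": None,
          "planned_start": date(2018, 4, 2), "planned_end": date(2018, 4, 1)}
print(validate_ticket(ticket))
# ["'status' is required", "'planned_end' must be later than 'planned_start'"]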

Users

The objects representing ALM users are usually quite simple. Tasktop requires the user API to provide only a unique identifier, though user objects usually contain a display name too.

Requesting a user object might return something like this:

{
  "username": "rsantinon",
  "name": "Rylan Santinon",
  "email": "rylan.santinon@tasktop.com",
  "active": true
}

The schema of this object is pretty self-explanatory: it contains the ‘username’ as the unique identifier, plus the display name, email address and an ‘active’ flag.

The ‘active’ flag is a feature that allows “soft deletion” of users, which solves the problem of what to do with users who have left the company. Deleting the user outright causes problems, because you would then have work items that reference an object that no longer exists in the system.

Tasktop can use a user’s name and email address to automatically match users across two different ALM systems.
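
One plausible way to implement that matching (a sketch only; a real system must also handle duplicates, renames and inactive users):

def match_user(user, candidates):
    """Match a user from one ALM system to a user in another,
    preferring the unique email address over the display name."""
    email, name = user.get("email"), user.get("name")
    for other in candidates:
        if email and other.get("email") == email:
            return other
    for other in candidates:
        if name and other.get("name") == name:
            return other
    return None

other_system = [{"username": "rsanti", "name": "Rylan Santinon",
                 "email": "rylan.santinon@tasktop.com", "active": True}]
me = {"username": "rsantinon", "name": "Rylan Santinon",
      "email": "rylan.santinon@tasktop.com"}
print(match_user(me, other_system)["username"])  # rsanti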

We love talking about API design and value stream integration – chat with us today about any questions you have!


The False Dilemma: Do I invest in release automation or tool integration?

Thu, 03/15/2018 - 13:42

CIOs may have more dollars to play with in 2018 – enterprise IT budgets are expected to rise by 3 percent – but that doesn’t make their day job any easier. They still need to extract more value from IT to help drive the business. But with all the technology and noise in the market, this can be tricky, and making the right investments gets harder by the day – especially with their overwhelming workload.

To make the right investments, CIOs rely heavily on IT managers and Central IT. Yet the latter are often presented with a dizzying array of false dilemmas – such as whether to invest in release automation or value stream integration. Fortunately for this particular dilemma, the answer is both. Investment in the two technologies increases their individual value, their combined value, and the value they provide to all the other tools used in planning, building and delivering software at scale.

Release automation is a critical component of DevOps, sitting at the center of the CI/CD (continuous integration/continuous delivery) pipeline. Release automation tools streamline and automate the activities between code commit and production, helping ship software products faster. The fact that 50 percent of leaders will look to implement at least one release automation solution by 2020 (an increase of 15 percent from today) reflects its critical role in the software delivery process.

The CI/CD stage, however, is only one part of the software delivery process (albeit an extremely vital one) – what about everything that happens before? What about the activities that ensure the right product is being built so that it will provide value to the customer? It doesn’t matter how fast you release a product if it’s not what the end user asked for.

Consider a dumbwaiter at a restaurant. The invention and automation of the dumbwaiter has dramatically streamlined food service, bypassing manual transportation across multiple levels of a building.

Reducing lead time from frying pan to the customer table, not to mention reducing the stress on the knees of busy servers, the dumbwaiter should ensure steaming food arrives at a diner’s table at a calculated time – no matter how busy the restaurant is. Yet that is only one potential bottleneck.

What about all the other steps that deliver a quality dining experience? The table booking? The drink and food orders? How are the latter communicated? How does a head chef effectively communicate with the front of house and kitchen staff? A chef can only cook to order – if the order is wrong, it simply doesn’t matter how quick the table service is. The customer will be unsatisfied, and the restaurant’s Tripadvisor page will likely take a very public pasting.

Integration solves this issue – connecting, automating and streamlining the whole process from request to delivery. CIOs and IT managers obtain end-to-end visibility, and all steps can now be traced and measured to continuously improve performance.

For a much deeper understanding into why organizations should look to invest in both release automation and value stream integration – as well as the need to invest in specialist tools at all key stages of the software delivery value stream – read Tasktop’s Naomi Lurie’s latest article Why you need enterprise toolchain integration alongside release automation.

Want to know more? Request a personalized demo of how you can integrate your release automation tools with the rest of your software delivery value stream to build better products faster.


Why we need to talk about contingency plans in software development

Mon, 03/12/2018 - 09:09

The other day I read an article on why fighter pilots know that quick reactions are for losers. The gist of the article is that you need to respond to a situation, instead of waiting to react to it. The author explains that a pilot’s response is based on countless hours of experience, planning, and thinking through the all-important question of, “what will I do when X happens?”. The piece got me thinking about the insurance measures we take in enterprise software delivery, as well as inspiring three very different trains of thought.

Train one

It’s not that pilots simply have supernatural reflexes, it’s that they’ve trained to know what to do in most situations. As a guy who served for a few years underwater on a nuclear submarine, I can relate to that. We ran endless drills so we would know how to respond instead of react.  Although a submarine and a fighter jet are very different beasts, they’re both (relatively) small tubes operating in hostile environments where a small mistake can be fatal.

Train two

This article also got me thinking about the old idea of a court jester. A court jester is commonly thought of as an entertaining clown. Someone to make the king or queen laugh. Occasionally, they served as a counterpart to the yes-men of the royal court. A jester could deliver news no one else dared to. Imagine if we tasked someone to take on this role at a company. What would they say to the CEO? What sort of truths could they say that no one else wants to?

Train three

The same day I read the article, I received an insurance bill in the mail. Now that my car is paid for, I don’t have the bank telling me how to insure my car. However, I’m still carrying more insurance than legally required. Why? Well, the reason I have my current car is because the last one was totaled. I was driving on the highway in the exit lane. The exit was over a quarter of a mile away, but traffic had backed up quite a ways and had come to a stand-still.  I stopped in the line of cars, but the car behind me didn’t. I was sandwiched and my car was a complete loss. Luckily, I had insurance and was able to replace my car a few days later.

All three of these trains of thought are about contingency, and about being realistic about what a business may face. It’s vital to spend time, money, and effort thinking about and planning for the things we don’t want to think about.

You don’t want to think of an engine failure while flying; you don’t want employees to point out ugly truths that go unspoken; and you don’t want your car wrecked right after quitting work to go back to get an MBA.

I like to think of all these scenarios in terms of insurance. Insurance is an expense you are willing to pay now in the hopes that you’ll never need it. But if you do, it will drastically reduce the negative impact of whatever happens. Often we think of insurance as a policy provided by another company. In reality, however, insurance is whatever we do to mitigate potential loss.

As an industry, we are very focused on maximizing gain. We prioritize features by their potential revenue versus effort. We spend our time on building the best features that will generate the most revenue. Very rarely do we stop and think about how much of our resources should go to minimizing losses.

We spend the majority of our time thinking about and working towards making the most money, yet we don’t spend enough time thinking about how to protect that money in the inevitable face of uncertainty. Insurance isn’t cheap. Paying for insurance shrinks your bottom line. The resources spent on insurance could easily be appropriated for more profits. And likely the biggest reason we avoid the cost of insurance isn’t the actual cost; it’s the pain of thinking about bad things. Deciding to insure against a risk means evaluating unfortunate scenarios and really asking, “what should we do?”. That can be a scary thing.

It’s not a matter of if something unexpected will happen. It’s only a matter of when it will happen. And when it does, do you want to react to it? Or do you want to respond to it?

Think of these as the unknown-unknowns. Things that fall well outside of the typical feature complexity challenges we deal with.

Here are some scenarios to think through:

  • What if company machines get hacked and we lose customer data?
  • What do we do if a huge customer request comes in after we’ve committed to one course of action?
  • What do we do if we get sued?
  • What do we do if a key team member leaves?
  • What’s our response to a sexual harassment complaint?
  • What if our servers all die?
  • What if our task tracking tool dies? How do we keep working? Or do we?

I’m not in any way suggesting a majority of our time should be spent on these issues. I do suggest that some amount of time be set aside to work through these scenarios with a cross-functional team. The answers don’t have to be perfect, nor will the scenarios exactly match what happens in real life. But three things will happen:

  • Your organization may just have some answers before you need them
  • The act of practicing how to answer these hard questions will make it easier to do when you encounter something unexpected
  • Your organization will take a stand on which risks it’s willing to accept and which need a contingency plan

The biggest drawback to this style of thinking is that it takes time away from the day-to-day push to move the company forward. Insurance is a cost, and it’s time-consuming. It’s also intellectually painful to think of what can go wrong. And because we don’t want all this time and effort to be wasted, we just blindly hope that everything goes according to plan.

And just like with real insurance, when the metaphorical scat hits the fan, you’ll be thanking past-you for going through the pain of creating a response instead of waiting to simply react. That way, like a fighter jet, your product delivery will continue to soar against unpredictable winds.


What I took away from Women In Product: Austin 2018

Thu, 03/08/2018 - 08:00

Happy International Women’s Day!

Last week, Tasktop sponsored the inaugural ‘Women in Product: Austin’ event in collaboration with the Ann Richards School for Young Women Leaders. The event was geared towards women at different stages of their career, providing an opportunity to network and learn about the challenges, highlights, and the day-to-day life of being a woman in the field of Product Management.

From left to right (in the back): Ezinne Udezue (Senior Director of Product at Bazaarvoice), Alyson Baxter (Director of Product at Cratejoy), Nicole Bryan (VP of Product at Tasktop)

Bringing together 40 industry professionals, as well as approximately 20 students from the Ann Richards School, the event centered around a speaking panel of five accomplished women who had been through the struggles and successes of a career in Product. The panel provided the kind of insights that, I find, tend to slip our minds as we work or study.

The panelists shared tips that not only helped them get to where they are in their careers, but also keep driving them to succeed. As a young professional and recent graduate, I felt I could relate to both the students and the working professionals in the room. It was interesting to observe that many of the insights shared were applicable to everyone in the audience, regardless of the stage they were at in their career.

The piece of advice that resonated with me most was shared by panelist Tulsi Dharmarajan, VP Product & Design at Verb. Tulsi emphasized the need to have mentors – not just one, but multiple mentors who can provide guidance in different areas: from learning soft skills, to navigating the challenges unique to women, to offering the perspective that will help shape your career to fit you.

From left to right: Amanda de la Motte (Director of Product at CognitiveScale),
Ezinne Udezue (Senior Director of Product at Bazaarvoice), Alyson Baxter (Director of Product at Cratejoy), Nicole Bryan (VP of Product at Tasktop), Tulsi Dharmarajan (VP Product & Design at Verb), Heather Le (Product Management Consultant), Rebecca Dobbin (Product Content Manager at Tasktop)

The value of this really hit home when Tulsi shared how she continues to have multiple mentors guide her, even though she has built a successful career. Finding mentors, and learning from them, is something that as a young woman in Product I tend to put on the backlog and forget about.

One skill that was repeatedly emphasized was the importance of knowing how to communicate well. Being a part of a Product team, it is essential that we know how to communicate with customers, partners, and within our own organization to effectively progress the product roadmap. This is a skill that is critical from the early to later stages of a career in Product Management.

Though my key takeaways are focused on more factual insights, what made this event different from the other work conferences that I’ve attended was the energy I felt being in the room. It was uplifting. I felt connected with the other women there. There was a sense of eagerness to learn, overcome challenges, and succeed as women in the tech world.

As my colleagues and I drove back to work after the event, I felt a deep appreciation for organizations such as Women in Product that create opportunities for women to connect with each other – especially those working in male-dominated industries.

As the world of software and tech continues to evolve and grow at a rapid pace, it’s fantastic to see so many women at the heart of it. The future looks bright, especially with so many great and inspiring movements driving change. That includes today’s International Women’s Day, a day dedicated to the social, economic, cultural, and political achievements of women, as well as to reflection on the progress made towards gender parity and advocacy for the change that is still needed in today’s global society.

Check out our blog from last year for some inspiring quotes about gender equality from Tasktopians past and present.


11 IT tool integrations to optimize your enterprise software delivery

Thu, 03/01/2018 - 14:50

Time and time again, you’ve heard that the world is digital and that every company is a tech company. That software delivery is what gives you a competitive edge. And that you need all the right tools, people and methodologies (Agile, DevOps etc.) to accelerate the speed of delivery and quality of your software products.

You’ve probably also heard that Value Stream Integration is the missing piece – the secret sauce – behind all the best IT transformations in the world. That connecting your best-of-breed tools for planning, building and delivering software at scale, and automatically flowing project-critical information between practitioners, is absolutely vital to optimize the process.

That if all your specialist teams are to collaborate efficiently and effectively when scaling operations, they need to be working as one. That all work must be visible, traceable and manageable, with no costly manual work required to share important knowledge about a product’s status and development. But what does that all look like in reality?

We’ve analyzed over 300 leading enterprises – all high-performing IT organizations – to identify similarities between their software delivery value streams. What we found was that these enterprises all realize the massive value of end-to-end process automation beyond DevOps and the CI/CD pipeline.

In our latest webinar, we discuss the compelling insights that we have gleaned, including:

  • How IT tool integration accelerates enterprise software delivery
  • How to implement 11 popular tool integration patterns
  • Strategies to reach integration maturity through chained integration patterns

We also share the results of an analysis of 1,000 tool integrations, including how IT organizations are implementing a sophisticated integration infrastructure layer to automate the flow of work from ideation to production.

If you missed the live webinar, just click on the link below:

Want to know more? Contact us and/or request a personalized demo today to see how Value Stream Integration can have you competing with the best in no time.

You can also read more about our research in our press release Tasktop Research: Largest Enterprises Now Extending DevOps Process Automation Beyond Continuous Integration/Continuous Delivery.


The end of the manufacturing line analogy

Tue, 02/27/2018 - 14:18

This piece was originally published in the November/December 2017 issue of IEEE Software.

I recently visited the BMW Group’s Leipzig plant. My goal was to brainstorm with BMW Group IT leaders on how we could seamlessly integrate production lines with the software lifecycle. The visit involved a 10-km walk along the plant’s production lines, with plant leadership explaining each system, process, and tool involved in car production. That visit impacted my understanding of lean manufacturing more profoundly than all the books I’ve read on lean processes.

The plant is an incredible facility that leads the industry in technology and sustainability, producing a BMW 1 or 2 Series car every 70 seconds. It also houses the amazingly innovative i3 and i8 production lines. Walking into the Central Building (see Figure 1) combines the sense of watching a starship construction facility with the feel of a large tech startup. Open offices sit below an exposed part of the production line that moves cars from the body shop to the bottleneck of the plant (more on that later) and then to the assembly building.

Figure 1. The Central Hub Building of the BMW Group’s Leipzig plant. The plant
produces a BMW 1 or 2 Series car every 70 seconds. (Source: The BMW Group; used
with permission.)

As I asked the plant leadership hundreds of questions, my mind raced trying to draw parallels between how cars and software are built. The combination of robots and humans working in unison was a glimpse into the future of how human skill will be combined with AI and automation. But what impressed me the most was the plant’s architecture, which demonstrates an elegance and modularity any software architect would envy.

In Figure 2, the assembly line’s key stages are visible as the “knuckles” of the five “fingers” growing out to the right. Each finger is a key fixed point in the production line’s architecture, with the buildings growing outward as manufacturing steps are added and as technologies and customer demands evolve. I had never imagined that the principles I associate with software architecture could take such a physical, monumental form.

Figure 2. A drone photo of the BMW Group’s Leipzig plant. The manufacturing line’s
key stages are visible as the “knuckles” of the five “fingers” growing up from and to the
left of the middle of the plant. (Source: The BMW Group; used with permission.)

I spent the months after my visit thinking about how to apply the plant’s manufacturing innovations to rethinking how software is built. How do we emulate the visibility that the rework area provided? How do we align our software architectures with the value stream the way the Leipzig plant has done, from the building’s structure to the placement of each stage and tool? How do we blend automation and human work this seamlessly? And, most importantly, how do we make the key bottleneck of a software process as visible to our staff as the plant has done?

Then I had a lengthy talk with Nicole Bryan, Tasktop’s Vice President of Product Management, who convinced me that I was thinking about this all wrong.

Software Development Isn’t a Manufacturing Process

One of the most impressive things about the Leipzig plant is the large-scale implementation of just-in-time inventory. Even more interesting is that the cars are manufactured just-in-sequence: cars come off the production line in the same order that the customer orders come in. While the many stages are impressive, seeing the optimizations in the end-to-end process was nothing short of mind blowing.

The concept of pull is core to any lean system. For manufacturing, pull is the sequence of customer orders for physical widgets. At the BMW Group, the widgets are cars that meet market demand once they delight the customer with "sheer driving pleasure," a phrase posted throughout the plant for staff to see. If more 1 Series than 2 Series cars are delighting users, more 1 Series cars come off the line, and the line's tooling and processes adapt to the new demand.

As I walked the factory floor, the Zen koan stuck in my head was what these "flow units" would be in an amorphous software delivery process. Taking inspiration from the BMW Group's emphasis on "sheer driving pleasure," we might conclude that those flow units should be something that delights our end users. Yet we know that with the days of shrink-wrap and compact-disk stamping far behind us, developers delight nobody by shipping the same piece of software again and again.

Lean thinking is about letting the customer pull value from the producer. So, widgets in software should be units of business value that flow to the customer, producing some combination of delight, lack of annoyance, and revenue. I’ll tighten the definition of these flow units in a future column; for now, consider them to be features added, defects fixed, security vulnerabilities resolved, and similar units of business value that customers want to pull. Yet no two of these flow units are ever the same.
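To make that idea concrete, here's a minimal sketch (in Python) of how such flow units might be modeled. The class and field names are illustrative assumptions, not a formal definition:

from dataclasses import dataclass
from enum import Enum

class FlowUnitType(Enum):
    """Illustrative categories of business value that customers pull."""
    FEATURE = "feature"        # new value added
    DEFECT = "defect"          # quality restored
    SECURITY = "security"      # vulnerability resolved

@dataclass
class FlowUnit:
    """One unit of business value moving through the value stream.
    Unlike a manufactured widget, no two instances are alike."""
    id: str
    type: FlowUnitType
    description: str           # each unit has its own creatively defined shape

# Example: three dissimilar units flowing toward the customer
backlog = [
    FlowUnit("F-101", FlowUnitType.FEATURE, "One-click checkout"),
    FlowUnit("D-202", FlowUnitType.DEFECT, "Fix cart total rounding"),
    FlowUnit("S-303", FlowUnitType.SECURITY, "Patch session-fixation flaw"),
]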

And here we see a core difference. Whereas a car-manufacturing plant aims to churn out the same widget in various configurations with the highest speed, reliability, and quality possible, software development organizations crank out a different widget with every feature delivered. Determining what those features should look like is similar to the BMW Group designing its next car. But in high-efficiency software shops, it happens at a weekly or an hourly, not a yearly, cadence.

If you have a constrained set of inputs and want to produce high-quality widgets, your best bet is to create a completely linear batch-style process, the ultimate example of which is a car production line. But if you're cranking out a different widget every time, and defining that widget's size and shape is a creative process, a linear process is a wrong and misleading model.

Pitfalls of the Wrong Mental Model

As scientists, engineers, and technologists, we do well by reducing complex problems to simpler ones. But consider some of the missteps we've taken in past attempts to improve large-scale software delivery. Waterfall development looked great in theory because it made linear the complexity of connecting all the stakeholders in software delivery. Agile development came to the rescue but oversimplified its view of delivery to exclude upstream and downstream stakeholders such as business analysts and operations staff. DevOps addressed that by embracing operations, automation, and repeatability of deployment processes. But I now fear that by over-focusing on linear processes rather than the DevOps tenets of end-to-end flow and feedback, organizations are about to make similar mistakes by adopting an overly narrow and overly linear view of DevOps.

The ability to stamp out frequent releases in an automated, repeatable way can be a great starting point for DevOps transformations. But that's only a small step in optimizing the end-to-end software value stream. The theory of constraints [1] tells us that investing in just one segment of the value stream won't produce results unless that segment is the bottleneck. But how do we know it's the bottleneck? Even more important, what if we're looking for a linear bottleneck in a nonlinear process?
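In a strictly linear process, the theory of constraints gives a simple answer: the slowest stage gates the whole line. Here's a toy sketch of that search, with invented stage names and throughputs; it's precisely this simple search that breaks down once the flow stops being linear:

# Toy illustration: in a purely linear flow, the bottleneck is simply
# the slowest stage. Stage names and throughputs are invented.
stage_throughput = {        # units of work per week
    "plan": 40,
    "build": 55,
    "test": 25,
    "deploy": 120,
}

bottleneck = min(stage_throughput, key=stage_throughput.get)
print("bottleneck:", bottleneck)                         # -> test
print("line throughput:", stage_throughput[bottleneck])  # -> 25/week
# Investing anywhere but 'test' leaves end-to-end throughput unchanged.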

Software development comprises a set of manufacturing-like processes. Taken in isolation, each can be thought of as a batch flow in which automation and repeatability determine success. For example, in the 1970s, we mastered software assembly, with compilers and systems such as GNU Make providing batch-style repeatability for building very large codebases. In the following decade, GUI builders and code generation became an automation stage we now take for granted when building mobile UIs. Now, we're in the process of mastering code deployment, release, and performance management, making frequent releases a reliable and safe process. However, each of these is only a single building block of an end-to-end software value stream, analogous to the various stages of robots that form, weld, and assemble a car. But with software, these various stages don't combine to form the simple one-way batch flow of a production line.

If we could take a virtual MRI of the workflows in a large IT organization, similarly to viewing a moving x-ray of the BMW Group plant from above, what underlying structure would we see? I’ve done this for my own organization and for our clients’ organizations, and the resulting visualizations look nothing like an assembly line. But they do bear a fascinating resemblance to the airline network maps at the back of in-flight magazines. If you imagine the visualization of the flow of airplanes over time, adapting to route changes or bottlenecks due to severe weather and delayed crews, you’re starting to get the picture.

If we try to map an IT organization like an airplane network, what are the nodes? The routes? How do we map the flows of features and fixes across projects, products, and teams? I’ll examine this more closely in an upcoming column. For now, I propose that this network-based model is more representative of software development, and that by reducing software development to linear manufacturing paradigms, we’re pursuing the wrong approach. The process of identifying a linear batch flow’s constraints differs greatly from optimizing a network’s flow.
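As a rough illustration of the difference, here's a hedged sketch of a value stream modeled as a directed graph rather than a line. Every node and route here is hypothetical:

# A hypothetical value stream as a directed graph. Nodes are teams and
# tools; edges carry flow units. All names are invented.
value_stream = {
    "product-mgmt":  ["frontend-team", "backend-team"],
    "frontend-team": ["qa", "backend-team"],   # cross-team dependency
    "backend-team":  ["qa"],
    "qa":            ["release"],
    "service-desk":  ["backend-team"],         # defects flow back in
    "release":       [],
}

def routes(graph, start, end, path=()):
    """Enumerate every route a flow unit could take from start to end."""
    path = path + (start,)
    if start == end:
        yield path
    for nxt in graph.get(start, []):
        if nxt not in path:                    # avoid cycles
            yield from routes(graph, nxt, end, path)

for r in routes(value_stream, "product-mgmt", "release"):
    print(" -> ".join(r))

Even this tiny example yields three distinct routes to release; finding the constraint here is a network-flow question, not a search for the slowest station on a line.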

More Like Routing Airplanes Than Manufacturing Cars

At its core, the end-to-end software lifecycle is a business process that delivers value to end users. As such, the principles that James Womack and Daniel Jones listed in their summary of lean thinking very much apply:

Lean thinking can be summarized in five principles: precisely specify value by specific product, identify the value stream for each product, make value flow without interruptions, let the customer pull value from the producer, and pursue perfection. [2]

Many lean concepts are relevant when we're shifting our thinking of flow from an assembly line to a network, such as small batch sizes and one-piece flow to minimize work in progress. However, to avoid over-applying manufacturing analogies—or worse, continuing down the path of the wrong mental model—we must more clearly define the key differences between managing the iterative and network-based value streams of software development and managing the linear value streams of manufacturing:

Variability. Manufacturing has a fixed, well-defined set of variations for what will emerge from the end of the line, whereas new software features are open-ended. Manufacturing needs to minimize variability; software development needs to embrace it.

Repeatability. Manufacturing is about maximizing throughput of the same widget; software is about maximizing the iteration and feedback loops that drive innovation. We need repeatability at each stage of software delivery, such as reliable automated deployment, but we're trying to optimize more for flow and feedback than for repeatability.

Planning frequency. Cars are designed up-front in waterfall cycles spanning years. Modern software organizations usually plan delivery using a two-week sprint cadence. This means we must design our value streams for frequent planning and change.

Creativity. Manufacturing processes aim to achieve the highest feasible level of automation, which is facilitated by removing any creative and nondeterministic work from the production process. Creative work shifts to defining and tuning the production process itself. We see some of this in software. For example, defining the value stream from planning through deployment can be a bigger technical challenge than coding a new feature. However, even with the coming major advances in automation and AI, we'll still be left with creative work and collaboration at each step of the software value stream.

Visibility. What makes software so interesting is that it's not subject to physical manufacturing constraints, making it almost infinitely malleable. This means that adaptation to a market's needs can happen at a dramatic pace. However, the lack of physical bits makes gaining visibility of flow and output a fascinating challenge, in contrast to how explicit this is in a car-manufacturing plant. Just as we had to invent microscopes to understand the inner workings of a physical world our eyes couldn't see, we now need a new set of tools to understand and manage intangible software value streams.

If you buy into the notion of software value streams forming a network and into the airplane traffic analogy, we must also consider what makes for robust, efficient networks, ranging from route optimization to flow control. For example, to optimize a network, we must consider the following:

Throughput. We can measure a network's effectiveness as throughput—for example, how many passengers can be transported along certain routes. Where in an IT organization should we invest to gain the highest increase in overall throughput?

Latency. Latencies are easy to deal with in a linear process, but what about the scenario in which one feature must be implemented by both a front-end and a back-end team? Does outsourcing to distant time zones increase latency? How do we measure overall network latencies and end-to-end lead times to reduce time to market?

Resiliency. A robust network assumes that nodes can fail while flow remains. How does this relate to a failed product or an insurmountable technical debt?

Finally, Metcalfe's law tells us that a network's value grows with its connectedness. If our value stream network has insufficient connectedness, is there any point in optimizing any particular stage? For instance, assume that no formalized feedback loop exists between operations and service desk staff working with an IT service management tool such as ServiceNow and developers coding in an agile tool such as Jira and planning releases in Microsoft Project. In this case, will investing millions into continuous delivery produce any measurable business benefit? When a company's competitiveness in the market is on the line, ad hoc answers to these questions don't suffice. We need a more robust model for software value stream networks; this is something I'll explore in an upcoming column.
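In the meantime, a small sketch, with invented stations and lead times, hints at how the throughput, latency, and connectedness questions above might be made measurable:

# Sketch (invented numbers): network-style measurements for a value
# stream. Edge weights are hypothetical lead times in days.
edges = {
    ("ideate", "build"): 5,
    ("build", "test"): 2,
    ("test", "deploy"): 1,
    ("deploy", "operate"): 1,
    ("operate", "ideate"): 10,   # the feedback route is the slow one
}

# Latency: end-to-end lead time along one route
route = [("ideate", "build"), ("build", "test"),
         ("test", "deploy"), ("deploy", "operate")]
print("lead time:", sum(edges[e] for e in route), "days")

# Connectedness (Metcalfe): how many of the possible links exist?
nodes = {n for edge in edges for n in edge}
possible = len(nodes) * (len(nodes) - 1)
print(f"connectedness: {len(edges)} of {possible} directed links")
# A sparsely connected network caps the value of optimizing any one node.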

The Leipzig plant’s bottleneck is the Paint Shop. Although the station employs cutting-edge high-voltage curing, changing the paint color and drying time in big ovens takes well over the 70 seconds I mentioned earlier. The resulting need to re-sort the cars by the desired color, batch them into the dreaded inventory, and reorder them into the just-in-time sequence is the incredible mechanical ballet that takes place above the plant’s lunchroom in the ultimate tribute to value stream visibility.

As I walked out of the Leipzig plant, my perspective was transformed by the ingenuity, innovation, and managerial sophistication that the BMW Group has attained. It's now time for us to lay the groundwork and new mental models that will let us attain this kind of precision, perfection, and flow.

Acknowledgments

I’m grateful to Frank Schaefer and Rene Testrote for arranging the visit and reframing my perspective.

References

1. E.M. Goldratt and J. Cox, The Goal: A Process of Ongoing Improvement, North River Press, 2014.

2. J.P. Womack and D.T. Jones, Lean Thinking: Banish Waste and Create Wealth in Your Corporation, 2nd ed., Productivity Press, 2003.


Maximizing Jira – understanding your centers of criticality

Thu, 02/22/2018 - 10:27

If, like most enterprises, your organization builds its own software, it's highly likely your developers are using Jira to plan their work. But are you using it to its full potential?

That depends on how you’re using Jira within the context of your software delivery value stream. The planning, building and delivery of software products at scale requires a complex network of specialist roles, tools and methodologies. And it’s how these elements work together that maximizes the value of Jira and all the other systems that you employ.

Just like developers need their own purpose-built tool in Jira, all the other specialists in the software delivery process require their own specialist tools. This is because the functionality their role requires either isn't in Jira, or isn't there to the level they need. Product managers need a proper product management tool, project managers need a proper project management tool, testers need a proper test management tool, and so on.

It's important to remember that plugging too much workflow into Jira can flood the tool and undermine its productivity powers. Because Jira is a 'center of criticality' in the value stream – it ties the developer to the working software in production and to the original business need – it's vital that all product-critical information can flow seamlessly through the tool and work with all other centers of criticality.

To do that, you need to identify your centers of criticality, which are systems that create product-critical information (artifacts such as requirements, features, epics, stories, tests, etc.). Then, you need to connect them to Jira so that all key information that pertains to a product's development and delivery is accessible to the key stakeholders in the process.
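At the plumbing level, that connection often reduces to mirroring artifacts between tools. Here's a minimal sketch using Jira's standard REST issue-creation endpoint; the site URL, project key, credentials, and the upstream requirement are all placeholders:

import requests  # third-party HTTP library

# Hypothetical artifact created in an upstream product-management tool
requirement = {
    "id": "REQ-42",
    "title": "Support one-click checkout",
    "detail": "As a shopper, I want to pay in a single click.",
}

# Mirror it into Jira via the standard issue-creation endpoint.
resp = requests.post(
    "https://your-site.atlassian.net/rest/api/2/issue",
    auth=("integration-user@example.com", "api-token"),  # placeholder creds
    json={
        "fields": {
            "project": {"key": "DEV"},
            "issuetype": {"name": "Story"},
            "summary": f'[{requirement["id"]}] {requirement["title"]}',
            "description": requirement["detail"],
        }
    },
    timeout=30,
)
resp.raise_for_status()
print("Created Jira issue:", resp.json()["key"])

A production-grade integration would also propagate updates in both directions and reconcile conflicts, which is exactly where purpose-built integration tooling earns its keep.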

To learn more about what a center of criticality is and why you should care, read our VP of Product Management Nicole Bryan’s DevOps.com article The role Jira plays in complex value streams.

For further information on Jira's role in enterprise software delivery and why there's no 'one tool to rule them all', download our white paper on why Jira works best in an integrated best-of-breed tool strategy.

Want to know more? Request a customized demo to connect your centers of criticality to extract even more value from your favorite tools.


How to sharpen your competitive edge in a digital world

Tue, 02/20/2018 - 09:30

Competitive advantage in a digital world hinges on how fast you can deliver the right software products to internal and external customers. By “right” software, we simply mean a product (i.e. a set of features) that delivers value to a customer’s business.

It’s therefore logical to optimize the enterprise software delivery process for:

  • Faster time to value
  • Higher quality products
  • Premium digital experiences
  • Increased productivity
  • Reduced production overhead
  • Tighter customer feedback loops

What is blunting your competitive edge?

While a typical software value stream comprises the very best people, best-of-breed tools and methodologies (such as Agile, DevOps, and other IT transformations), many organizations are still not seeing the results they want. Despite their best intentions and high investment, they’re still playing catch up with nimble digital disruptors who have software in their DNA.

Either software is too slow out of the door, too many defects make it into production, or the product delivered is not what the customer asked for. And when these mature organizations do attempt to analyze the process to try to measure and improve it, they find it difficult to even pin down what they should be looking for.

A time- and cost-intensive IT audit may bear some fruit – but only if you have the time and budget to conduct such a labor-heavy endeavor. If, like many organizations, you simply don't have the capacity or resources for such a big and disruptive initiative, there is an easier solution.

Sharpen through Value Stream Integration

Through Value Stream Integration, organizations can create a modular tooling infrastructure that connects all tools and specialist roles in the software delivery value stream to make the process visible, traceable, measurable, and manageable.

Any bottlenecks and opportunities to improve efficiencies can be quickly identified in a few clicks through a dynamic visual interface. And CIOs can systematically increase speed to delivery; increase team capacity; improve product quality; and optimize for business outcomes.
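Under the hood, that kind of bottleneck-spotting can be as simple as measuring how long artifacts dwell in each state once the tools are connected. A hedged sketch, with invented state-transition events:

from datetime import datetime
from collections import defaultdict
from statistics import mean

# Hypothetical state-transition events harvested from connected tools
events = [
    ("F-101", "in-dev",  datetime(2018, 2, 1)),
    ("F-101", "in-test", datetime(2018, 2, 5)),
    ("F-101", "done",    datetime(2018, 2, 19)),
    ("D-202", "in-dev",  datetime(2018, 2, 2)),
    ("D-202", "in-test", datetime(2018, 2, 3)),
    ("D-202", "done",    datetime(2018, 2, 16)),
]

# Average dwell time per state reveals where flow is queuing up
by_item = defaultdict(list)
for item, state, ts in events:
    by_item[item].append((ts, state))

dwell = defaultdict(list)
for item, hops in by_item.items():
    hops.sort()
    for (t0, state), (t1, _) in zip(hops, hops[1:]):
        dwell[state].append((t1 - t0).days)

for state, days in dwell.items():
    print(f"{state:8s} avg {mean(days):.1f} days")
# in-dev ~2.5 days, in-test ~13.5 days -> testing is the constraint here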

Download our new white paper on the topic to gain a clear introduction to how Value Stream Integration optimizes enterprise software delivery and continuously sharpens your competitive edge.

And for a more personalized look into how integration can help your business, request a customized demo of your favorite tools being integrated to see how you can accelerate your time to value and yield tangible business results.


Why all CIOs should be prioritizing Software Delivery Value Stream Integration

Thu, 02/15/2018 - 10:29

Waiting 12 months to integrate your software delivery value stream can actually cost an organization up to $10 million a year in productivity overhead

The modern enterprise has to consider and prioritize a dizzying array of IT business initiatives. Many of these decisions fall to the CIO – after all, they’re responsible for leading an organization’s digital transformation.

That’s a lot of pressure. While CIOs know that leveraging their software delivery to the hilt is what gives them a competitive edge, often they put tool integration on ice. “We’ll come back to that in 12 months once we’ve sorted everything else!”

Yet waiting 12 months can actually cost an organization up to $10 million a year in productivity overhead (based on a typical 1500-person development team). Waiting costs money – and a CIO their job.
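As a back-of-envelope reading of that figure (the loaded cost and working weeks below are our assumptions, not numbers from the original analysis):

# Back-of-envelope reading of the $10M / 1,500-person figure.
# Loaded cost and working weeks are assumptions, not from the analysis.
annual_overhead = 10_000_000        # USD per year, figure cited above
team_size = 1_500                   # people, figure cited above
loaded_cost_per_hour = 100          # USD/hour -- assumed
working_weeks = 46                  # per year -- assumed

per_person_usd = annual_overhead / team_size           # ~$6,667/year
hours_lost = per_person_usd / loaded_cost_per_hour     # ~67 hours/year
print(f"~{hours_lost / working_weeks:.1f} hours/week per person")  # ~1.4

In other words, under these assumptions the claim amounts to each person losing roughly an hour and a half per week to disconnected tooling, which is a plausible order of magnitude.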

Fortunately, Value Stream Integration can actually alleviate the pressure on a CIO and enhance other IT initiatives. Explore the infographic below to see why:

Want to know more? Download our white paper on the topic to better understand why you need to integrate your software delivery value stream.

You can also request a customized demo of your tools to see how Value Stream Integration enables you to see, measure and optimize your software delivery process.


CIOs – are you measuring the right DevOps data?

Tue, 02/13/2018 - 10:01

“When digital transformation is done right, it’s like a caterpillar turning into a butterfly. When done wrong, all you have is a really fast caterpillar.” – George Westerman, Principal Research Scientist with the MIT Sloan Initiative on the Digital Economy

Has there been a more exciting yet challenging time to be a CIO?

Bridging the gap between the business and IT, a CIO is typically responsible for an organization’s digital transformation by leveraging technology and data to enhance business performance.

Given what is at stake, being a CIO can seem an unenviable job – Gartner predicts that by 2020, 50 percent of CIOs who have not transformed their teams' capabilities will likely be out of a job. Time is of the essence, and CIOs need all the help they can get – especially when it comes to the mysterious world of software delivery.

Competitive advantage in a digital world rests on an organization’s ability to rapidly build and deploy software products that deliver business value, i.e. enhance the speed and quality of business processes. Yet enterprise software delivery is one of the most technically-complex business practices that an organization can face. It requires sophisticated coordination of processes and data created by different specialists who work in disparate systems. In many ways, this environment is a CIO’s worst nightmare.

As all leaders know, you can't improve what you can't measure; you simply must have real-time insight into how a process is providing business value and supporting a digital transformation initiative. A coach can't make game-winning plays if he can't see the game.

To this end, visibility and measurement are paramount to creating and managing an effective software delivery value stream and optimizing DevOps initiatives. Measurement, however, is extremely difficult. How do you collect data that is complete, comprehensive and accurate when it exists in pockets all over the place? How do you analyze the impact of technology on people? The key is a combination of both survey and system data.

In this white paper for acmqueue – co-authored by Tasktop co-founder and CEO, Dr. Mik Kersten, and Dr. Nicole Forsgren, CEO and Chief Scientist at DevOps Research and Assessment (DORA) – CIOs can learn how to measure the right DevOps metrics and use their digital transformation to turn their organization into a butterfly.

Want to know more about how technology and data work in software delivery? Check out the video below to discover how you can flow all data from the software delivery value stream into one place to easily glean insights and improve your IT performance.

Want to take the next step? Call us today for a chat about how Tasktop can help CIOs with their digital transformation.


How to minimize conflict in enterprise software delivery with value stream architecture

Fri, 02/09/2018 - 08:40

More features out the door, faster time to market, fewer defects, and shorter time to value: it’s widely accepted that you have to deliver better software products faster to gain that all important competitive edge. That’s why you’ve invested in DevOps, Agile, best-of-breed tools and specialist people, often to the tune of millions of dollars.

And there have been improvements – a few more products out the door faster, a few more teams working in harmony – but it's still not enough. Competitors are still better than you at delivering software at enterprise scale, while nimble digital-native start-ups always seem two steps ahead. Why is this happening? Why aren't you yielding a tangible ROI?

Value Stream Architecture

The answer may lie within your software value stream architecture. When you take a step back, you realize that while all the components are dependent on each other, they're not actually working very well with each other. Only once you study how these tools and their users work together do you realize that they're not functioning as one. That your architecture is missing pipes and beams, and that there are wires hanging from the ceiling.

That there are constraints and conflict at every turn, plaguing and disrupting all stages of the software delivery value stream. That the flow of work is stymied, vulnerable to decay or misdirection, and often not visible – it's tantamount to building a car in a broken factory. In the dark. Everything and everyone is paying the price, from the people who build the product to the end product itself. Your business is taking a hit too.

How to minimize conflict

The key, then, is to minimize all this conflict by adopting a constraints perspective to identify and address flow-limiting aspects of your value stream. What elements in your process are weighing you down, and how do you alleviate this pressure? In a recent article for SD Times, our VP of Architecture, David Green, explains exactly why architecture is just so critical for a high-performing software delivery value stream, and how you can begin to build a system that works best for your business.

Want to know more? Chat to us today to discuss how value stream architecture can optimize your software delivery at scale and help you stay ahead of your competitors.


