How leading IT organizations are using Value Stream Integration to generate more business value through software delivery
Earlier this year, Tasktop analyzed 300 value stream diagrams of the largest U.S. enterprises across major industries such as financial services and healthcare to better understand how organizations are delivering software at scale. We wanted to know exactly how global leaders were successfully combating the threat posed by digital-savvy businesses and continuing to be innovative leaders in their field.
While Value Stream Integration was playing a key role in helping them manage and improve their software delivery value streams, we wanted to go deeper than that; beyond the sheer productivity benefits of say, flowing defects between the tools used by development and test teams. We wanted to better understand the business value they were generating through integration.
Through this research, we identified some striking similarities between these global heavyweights in terms of mindset and approach to software delivery.

Leaders are doing things differently…
- They’re looking beyond Agile and DevOps and thinking end-to-end to optimize how work (value) flows from customer request to operation and back through the customer feedback loop
- They’re defining, connecting and managing their Value Stream Network through integration and investing in Value Stream Management
- They’re building a modular infrastructure, enabling them to plug teams and tools in and out without disrupting existing workflow
- They’re focusing on end-to-end flow time metrics to identify bottlenecks and efficiency opportunities to optimize the process
- They know there is ‘no one tool to rule them all’. They recognize that they need to embrace an integrated best-of-breed tool strategy that provides specialization at each key stage of the process and works together as one cohesive system.
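To make the flow-time idea from the list above concrete, here is a minimal sketch in Python. The stage names, timestamps and data model are invented for illustration (they don't reflect any particular tool); it computes end-to-end flow time per work item and flags the stage with the longest average duration as the likely bottleneck:

```python
from datetime import datetime
from statistics import mean

# Hypothetical timestamps recording when each work item entered each stage.
# Stage names and dates are illustrative, not a real tool's data model.
items = [
    {"id": "STORY-1", "stages": {"request": "2018-06-01", "build": "2018-06-04",
                                 "test": "2018-06-12", "deploy": "2018-06-14"}},
    {"id": "STORY-2", "stages": {"request": "2018-06-02", "build": "2018-06-03",
                                 "test": "2018-06-11", "deploy": "2018-06-12"}},
]

STAGES = ["request", "build", "test", "deploy"]

def days(a, b):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).days

# Per-stage durations across all items, plus end-to-end flow time per item
durations = {s: [] for s in STAGES[:-1]}
for item in items:
    ts = item["stages"]
    for stage, next_stage in zip(STAGES, STAGES[1:]):
        durations[stage].append(days(ts[stage], ts[next_stage]))
    item["flow_time"] = days(ts[STAGES[0]], ts[STAGES[-1]])

# The bottleneck candidate is the stage with the longest average duration
bottleneck = max(durations, key=lambda s: mean(durations[s]))
print(bottleneck)  # → build
```

In this toy data, both stories spend eight days between "build" and "test", so that handoff surfaces as the place to investigate first; real value stream data would simply have more stages and more items.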
There are some excellent specialist tools out there that address every facet of the software delivery process. Some are mature legacy tools, while others are newer to market yet equally popular and powerful.
Some of the most popular tools used by leaders include:
- Micro Focus (HPE) ALM (Test Management)
- Microsoft TFS (Agile Development)
- CA Clarity PPM (Project Management)
- IBM RTC (Agile Development)
- BMC Remedy (ITSM)
- IBM DOORS NG (Requirements Management)
- Atlassian JIRA (Agile Development)
- ServiceNow Service Desk (ITSM)
- CA Agile Central (Rally) (Agile Development)
- Blueprint (Requirements Management)
- Jama (Requirements Management)
- Tricentis Tosca (Test Management)
Leaders recognize that collaboration between practitioners is the linchpin of their software delivery. They also recognize that artifacts are the “currency of collaboration” and that focusing on how this collaborative data is enriched and flowed between tools is critical for faster, better product development.
The most popular artifacts used by leaders highlight the most important stages of the process:
- Story – descriptions of features of a software system
- Epic – a chunk of work that has a common objective
- Ticket – information that relates to an issue or problem from the field
- Defect – a bug, failure or flaw
- Test Case – a set of conditions or variables that determine if a system satisfies requirements
- Requirement – what the user expects from a new or modified product
- Feature – the functionality of a software application
Leaders are implementing a similar set of integration patterns. These patterns are frequent and critical interactions between collaborators and tools (via artifacts) at key stages of the software delivery process.
We’ve come a long way from just fixing bugs in code. While the first pattern – and still the most popular – is developer-tester alignment (via defects), we’ve identified a number of integration patterns across our customers’ value streams that dramatically improve the efficiency of their workflow.
The growth of these patterns reflects the evolution of software delivery as it becomes increasingly more complex and sophisticated as more roles, tools, workflows and artifacts emerge, intertwine and depend on each other.
More patterns will continue to emerge as organizations seek to improve efficiency across the process to make it more manageable and effective, and map their integrations to business drivers. While end-to-end integration must be the end goal as it encompasses the whole value stream and connects that all-important network, you may not be as far behind the leaders as you think.
Our research also examined the number of tools that respondents are integrating.

Types of integration patterns
Below are the 11 common integration patterns that leaders are using. The sophistication of these chained patterns has grown significantly over the last five years:
Why: Brings the people who manage product workflow closer to the people who understand what the customer needs from the product.
Why: Brings the people who understand what the customer needs from the product closer to the people who build it to ensure it delivers value.
Why: Brings the people who manage the products closer to the people who build them to ensure that development is on schedule and on budget.
Why: Brings the people who build the product closer to the people who test the product to reduce defects in production.
Why: Brings the people who log customer product feature requests closer to the people who resolve them.
Why: Enables the people who build the software to keep the people who work closely with the customer aware of any known issues (defects) going into production.
Why: Brings the people who log customer product issues closer to the people who fix them.
Why: Traces the product journey from the people who plan and design the software to the people who build it. This traceability allows all stakeholders to understand a product’s development across key stages to increase product accuracy, speed of delivery, and helps identify where any issues originated for faster time-to-resolution.
Why: Brings the people who test products closer to those who know what the customer needs to ensure test coverage meets any strict regulations or compliance issues.
Why: Brings all stakeholders involved in the process from outside the organization closer to those inside the organization to ensure consistency of information, compliance of process, and better supply chain collaboration.
Why: The holy grail of reporting is to obtain “one source of the truth.” Yet it’s so difficult to get this view when critical information relating to a product’s development is siloed in different tools. Integration aggregates all data into one database that can be used for reporting purposes.

Common characteristics of leader success stories
- Multiple best-of-breed tools, with no streamlined processes
- Poor visibility into the end-to-end flow of work, making it difficult to measure and improve performance
- Regulated industries that require traceability to demonstrate that safety-critical software is appropriately tested
- Degraded productivity due to manual work, duplicate entry, and collaboration through email, spreadsheets and status meetings
- Disruptions via acquisitions, mergers, and reorganizations
- Enhanced efficiency and coordination
- Improved visibility and traceability
- Future-proof infrastructure to adapt to evolving business needs
- Value Stream Thinking is vital to the success of Agile, DevOps and other IT transformations
- Enterprises with connected value streams are thriving
- A connected value stream is a key enabler in the shift from managing software projects to delivering products and business value
- A sophisticated integration infrastructure is required to bring the value to life
For a more in-depth analysis into the research, watch the below webinar featuring Nicole Bryan (our VP of product) and Chandler Clemence (Product Analyst), where they share the results of an analysis of 1,000 tool integrations to learn:
- How IT tool integration accelerates enterprise software delivery
- How to implement 11 popular tool integration patterns
- Strategies to reach integration maturity through chained integration patterns
Automates Information Flow Across Value Stream
- Enables the frictionless flow of artifacts, as well as information from events across the value stream
- Removes non-value-added work and bottlenecks
- Increases velocity and capacity
- Provides automated traceability
- Dramatically improves employee satisfaction (no manual handoffs etc.)
Enables Value Stream Visibility
- Provides real-time view of product statuses
- Unlocks lifecycle activity data from separate tools
- Automatically compiles data into single database
- Enables management to create dashboards and reports for holistic view of value stream
Creates a Modular, Agile Toolchain
- Enables organizations to use products that best support each discipline
- Drives more value from each tool
- Easily add, replace and upgrade tools (ideal for mergers and acquisitions, and restructuring)
- Creates proactive environment for innovation
With this connected network, organizations can finally see, manage and optimize one of the most important processes in their business; the engine that drives their prosperity in a digital-centric world. You cannot overstate how important it is to have this network, this complete system. This is the state where innovation thrives, and where continuous improvement can be executed. And, of course, where you gain the essential visibility to be confident you’re always building the right products to drive your business like a leader.

Getting started – how to become a leader
Speak to us today about a complimentary one-hour consultation with one of our value stream experts to help you start visualizing, measuring and optimizing the value streams that exist in your business today.
The mad rush to deliver software faster is a major threat for an organization’s quality control and brand integrity. QA and test teams are under pressure like never before to ensure that software products are always functional, reliable and delivering value to end users. If it goes wrong, you can bet your bottom dollar that test managers and their team will be first in the firing line from up high.
The velocity and volume of work isn’t the only issue, either. It’s how the work flows. A software delivery value stream comprises multiple stages underpinned by a network of teams, tools and processes. All these touchpoints and routes can disrupt and contaminate the flow and damage a product’s quality. Proper orchestration of this network, and of how work flows across the value stream, is key to creating an effective end-to-end testing infrastructure.
As Matt Angerer, our pre-sales architect explains in his article for SD Times, more testers and automation isn’t the answer. Sure, test automation is a critical component to your overall testing strategy, along with having the right team of QA Analysts and Testers. But focusing on adding more testers to increase coverage, or automating just for the sake of automating can create unnecessary overheads in your value stream.
To remain lean, Agile, and adaptable — you need to closely examine and measure your data points. “The answer,” he writes, “is in the data.” Matt goes on to propose 12 KPIs to track that can help you unlock the full potential of your QA organization:
- Active defects
- Authored tests
- Automated tests
- Covered requirements
- Defects fixed per day
- Passed requirements
- Passed tests
- Rejected defects
- Reviewed requirements
- Severe defects
- Test instances executed
- Tests executed
By understanding the indicators of quality, you can better position your people, adjust your processes, and decide whether you have the right enabling technology in place to improve quality while accelerating velocity. Most organizations make adjustments before closely examining and measuring these KPIs over time. The key is to understand and document the trends that occur within teams, within projects, and within products. By understanding and documenting QA trends, a QA leader is better able to pivot their team accordingly to deliver in lock-step with the rest of the IT organization.
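As a rough illustration, several of the KPIs above reduce to simple counts over exported defect records. This is a hypothetical sketch (the field names and status values are invented, not any real tool’s schema):

```python
from collections import Counter
from datetime import date

# Hypothetical defect records exported from a tracking tool.
# The field names and status values are invented for illustration.
defects = [
    {"id": 1, "status": "open",     "severity": "high", "fixed_on": None},
    {"id": 2, "status": "fixed",    "severity": "low",  "fixed_on": date(2018, 6, 1)},
    {"id": 3, "status": "fixed",    "severity": "high", "fixed_on": date(2018, 6, 1)},
    {"id": 4, "status": "rejected", "severity": "low",  "fixed_on": None},
    {"id": 5, "status": "open",     "severity": "low",  "fixed_on": None},
]

# Three of the twelve KPIs, computed as simple counts
active_defects = sum(1 for d in defects if d["status"] == "open")
rejected_defects = sum(1 for d in defects if d["status"] == "rejected")
severe_defects = sum(1 for d in defects if d["severity"] == "high")

# Defects fixed per day: group fixes by the date they were resolved
fixed_per_day = Counter(d["fixed_on"] for d in defects if d["status"] == "fixed")

print(active_defects, rejected_defects, severe_defects)  # → 2 1 2
print(fixed_per_day[date(2018, 6, 1)])                   # → 2
```

The value comes less from any single snapshot than from recording these counts daily and watching the trend lines, which is exactly the documentation-over-time practice described above.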
In many organizations, it’s up to the testing and QA teams to declare whether an application is ready to ship and deliver value to customers. In order to make that critical decision, they need real-time information from across the toolchain to assess the health of a product. Value Stream Integration helps flow that critical information across tools to improve Quality Management. Check out the white paper below to learn more.
Want a more personal touch? Request a highly-customized demo of how Tasktop can help you connect your end-to-end value stream to help you to measure, improve and optimize your enterprise software delivery.
The post 12 KPIs to help you improve the quality of your software delivery appeared first on Tasktop Blog.
One of the most popular topics of conversations I have with Nicole Bryan, Tasktop’s VP of Product, is the STEM gender divide and how to get more women into careers in tech.
Our Product team is 50 percent women, an unusually high number in the world of tech and one that we are incredibly proud of. Our other teams also strive for diversity and equality in their hiring practices, but this remains a challenge for our Software Engineering team.
According to the U.S. Government’s report, Women in STEM: 2017 Update:
- Women filled 47 percent of all U.S. jobs in 2015, but held only 24 percent of STEM jobs. Likewise, women constituted over 50 percent of all college-educated workers, but made up only 25 percent of college-educated STEM workers.
- While nearly as many women hold undergraduate degrees as men overall, they make up only about 30 percent of all STEM degree holders. Women make up a disproportionately low share of degree holders in all STEM fields, particularly engineering.
Women with STEM degrees are less likely than their male counterparts to work in a STEM occupation; they are more likely to work in education or healthcare. While women are equally capable of excelling in STEM fields, something is preventing them from pursuing this career path. Gender stereotypes absorbed from a young age teach women that careers focusing on communication, social skills, and the arts are a better fit for them.
When women see that their STEM classes are ~70 percent male, the ideas that they’ve absorbed from an early age (that STEM is not for them) are reaffirmed. If we want women to pursue careers in STEM, not only do we need to show women from a young age that STEM is a career they can thrive in, but we also need to work on creating STEM communities and processes that are inviting and comfortable for women.
Gender equality is a pervasive problem in the North American tech sector, and one we are working on identifying concrete steps to solve at Tasktop. Some of the steps we’ve taken so far have included:
- Focusing on implementing role model ladders
- Founding and participating in Austin’s first ever Women in Product chapter
- Working with the Ann Richards School for Young Women Leaders to introduce young women in high school to careers in tech
Most recently, we capped off our second year of working with the Ann Richards School on their annual internship program, which gives female students in their Junior year of high school an opportunity to gain experience at a professional organization for one week. I wish I had had this opportunity when I was in high school!
Our hope is that by giving young women the chance to see what it’s like to work at a tech company at a young age, we can help open their eyes to professional paths they may not have even known existed. By seeing the inspiring women in our office and getting hands-on experience in the field, they can see that a career in tech is something attainable and fulfilling. Each year, I’m blown away by the incredible work that our interns are capable of achieving in such a short period of time.

In the words of our interns…
“It was a great experience to work with real professional women. It was different from what I had been learning in class as we hadn’t touched much on the basis of actual software, but a couple of engineers were very impressed that we had experience with Arduino! I learned a lot about what Tasktop does and is. Overall, I had a great time and I learned a lot. I’m grateful that I had the opportunity to do this.” – Brisi Duran, student at Ann Richards School
“Nicole welcomed us into her office and explained what she does and how she reached that point in her career. While she was talking, she emphasized the importance of diversity and how it helps a company to grow because you have different perspectives. This was music to my ears because this is something I have learned and grown from during my time at Ann Richards. At Tasktop, I was surprised to learn that the ratio of women to men on the Product team was 50:50. This was a stark contrast to my mom’s work environment, where she is almost always the only woman. And I never really thought anything of it either, since it’s always been like this. However, my time at Tasktop has taught me that if you don’t want that to be the norm, it doesn’t have to be. While reading the posts Nicole wrote, talking with her, and working under Rebecca Dobbin and Naomi Lure, I was fascinated by how Tasktop has cultivated, and continues to cultivate, a workplace that is so diverse.” – Sage O’Brien, student at Ann Richards School
If you would like to learn more about our work with women in tech, our internship/co-op programs, and our other work in the community, please drop us a line!
The post Bridging the STEM Gender Divide: One High Schooler at a Time appeared first on Tasktop Blog.
This week at DOES London 2018, we spoke about looking “Beyond CI/CD” to take greater control of your DevOps transformation to accelerate your enterprise software delivery. By implementing both Release Automation and Value Stream Integration, organizations, agencies and institutions obtain a potent force that enables them to connect, visualize, measure and optimize how they plan, build and deliver software at scale.
Application Release Automation (ARA) is crucial in helping larger organizations ship software products faster in order to fight off the threat posed by truly Agile, digital-native competitors. Yet despite investments in ARA to eliminate the bottleneck between “dev” and “ops”, many are still finding that their end-to-end lead time and Time to Value (TtV) are too long, unmeasured and unpredictable. That’s because there are bottlenecks, latencies and dependencies further upstream.
What’s more, once the delivery bottleneck is solved thanks to ARA, organizations no longer know where their current bottlenecks are – nor how to find them. Why? Because ARA tools were not designed to provide visibility into work further upstream – they’re focused on the activities in the release pipeline from code commit to production. With no holistic view or end-to-end traceability, the software delivery process remains obscure and unmanageable.
That’s why leading IT organizations are investing in Value Stream Integration to go beyond CI/CD, which connects all key stages of software delivery – including release activities. Not only does it remove delay, reduce costly errors and ensure cross-team understanding, but it provides organizations with a fully visible and traceable flow of work that can be measured, managed and optimized to accelerate value delivery and enable true end-to-end DevOps.
Download our latest white paper Beyond CI/CD: why you need Release Automation alongside Value Stream Integration to discover how combining both practices creates a potent force to help you accelerate your Time to Value (TtV).
Chat to us today to discuss how we can help you gain better insight into your software delivery value stream to accelerate all activities from ideation to operation for better software delivered faster.

Further reading
The post Beyond CI/CD: why you need both Release Automation and Value Stream Integration appeared first on Tasktop Blog.
The theme for DevOps Enterprise Summit London 2018 was Get Together, Go Faster – and while that was definitely not my experience going through customs at Heathrow Airport, it rang true through many of the plenary and breakout sessions.
This year, Tasktop was a platinum sponsor, a book signing sponsor and had four speaking presentations at the conference. With 19 Tasktopians in attendance, it was an excellent opportunity to meet with DevOps champions, customers and partners from across EMEA.
After proper caffeination thanks to XebiaLabs on Monday morning, Chris Hill – Jaguar Land Rover Head of Systems Engineering – kicked off the presentations by discussing the culture of DevOps at the automotive company. He reflected that democracy isn’t always the right approach to software delivery and reiterated the importance of articulating the ‘why’ behind decisions, even if it’s sometimes difficult.
Later in the morning, Tasktop Director of Digital Transformation, Dominica DeGrandis, spoke about visualizing impacts to your workflow and metrics. The presentation addressed the constant challenges of conflicting priorities and how to best solve them by making your work visible.

Dominica DeGrandis during her session “Conflicting Priorities: How to Visualize Impacts to your Workflow & Metrics”
Then Tasktop Director of Product Marketing, Naomi Lurie, and Senior Solutions Consultant, Laksh Ranganathan, talked through what enterprise IT can learn through a startup’s journey into Value Stream Management.
Naomi and Laksh spoke about Tasktop’s integration journey to connect our own end-to-end value stream, including the challenges, lessons learned, and the metrics captured along the way. Like Marvel’s Ant-Man, while Tasktop may be a smaller company, we’re solving the universal bottlenecks that larger IT organizations are facing, enabling us to help our customers solve their own.

Naomi Lurie and Laksh Ranganathan give their “Ant-Man Perspective”
During the presentations and breaks, I encouraged attendees to drop off their Tasktop attendee bag insert – a lego brick – at the Tasktop booth. Each lego brick represented a £1 donation to one of two reputable charities in the realm of technology.
We selected ComputerAid and Girls In Tech as the two deserving non-profit organizations; the former dedicated to empowering the developing world by providing access, education, and implementing technology to developing countries, while the latter helps create a support framework to help women advance their careers in STEM. By the end of the conference, we had committed nearly £500.
On Monday afternoon, Tasktop VP of Product, Nicole Bryan, and former Nationwide Insurance DevOps Technology Director, Carmen DeArdo, spoke on the practical realities that large enterprises face when moving from a project management mindset to a product mindset. The session was so popular that it was standing room only and attendees were even being turned away. Luckily, the session was recorded and will be shared with attendees soon (phew).
During Monday’s afternoon break, we decided to do something sweet for our technology partners who were also sponsoring the conference. By sweet, I mean we literally delivered dozens of delicious logo-branded cupcakes to each partner booth. Tasktop would not be the organization we are today without the support of our extensive partner network, and if that isn’t reason enough for a cupcake, I don’t know what is.

One tasty looking toolchain
As the day came to a close, attendees lined up for the early-release signing of Tasktop CEO Dr. Mik Kersten’s new book Project To Product: How to Survive and Thrive in the Age of Digital Disruption with the Flow Framework. Over twenty years of experience and research have been poured into this book, and over 275 attendees walked away with the first copies. Are you one of the first? We’d love to hear your thoughts! Tweet to Mik using the handle @mik_kersten and the hashtag #ProjectToProduct. Only four months to go until the final copy is released at DevOps Enterprise Summit Las Vegas 2018!

Mik is all smiles as he signs copies of his eagerly anticipated book “Project To Product: How to Survive and Thrive in the Age of Digital Disruption with the Flow Framework”
Wrapping up day two were sessions from Barclays on the shift to end-to-end value streams that include the PMO, dojos from Verizon Wireless, and a deep dive into Project To Product and “The Flow Framework” from Mik Kersten himself.

Mik touches on the themes of his new book.
By the end of the conference I was questioning my choice of footwear, but I am also more certain than ever that it’s time to go beyond CI/CD by adopting an end-to-end mindset to software delivery. It’s only then that organisations will truly understand how to:
- Scale their DevOps transformations
- Improve their Time to Value (TtV)
- Visualize their flow to find the bottlenecks of sub-optimal performance
- Glean the right metrics for continuous improvement and faster delivery of business value.
We’ll be at DevOps Enterprise Summit Las Vegas this October and hope to see you there. In the meantime, we look forward to continuing the conversation.
We will also be publishing a new e-book this week – Beyond CI/CD: why you need Value Stream Integration alongside Release Automation – that provides a more in-depth look into how end-to-end flow ensures you maximize your investment in release automation and accelerate your software delivery.
Keep an eye on our blog and social channels for its publication!
The post Get Together, Go Faster: DevOps Enterprise Summit London 2018 Recap appeared first on Tasktop Blog.
Ask any CIO, quality assurance in enterprise software delivery is tricky business. After all, unlike the production of physical products, software is created via invisible knowledge work – work that travels through a complex network of activity between conception and delivery. It’s hard to comprehend something you can’t see, let alone test it.
As knowledge work, a piece of software is only as strong as the data (artifacts such as features, epics, stories and defects) behind it. And that data is only as strong as the means of communication that shares it. For many organizations, communication is one of the biggest threats to their quality assurance process.
Instead of an automated real-time flow of data across key stages in the software delivery value stream, the specialist teams who plan, build and deliver software are using manual handoffs (such as email, phone, IM, spreadsheets, duplicate entry etc.) to share and access product-critical information. Such archaic methods are slow and susceptible to human error and a huge danger to data integrity.
That’s why many leading organizations – including nearly half of the Fortune 100 – are automating this flow of product-critical information through Value Stream Integration. By connecting all the specialist tools in their value stream, these organizations are creating a single, traceable flow of work from end to end. In doing so, all information that pertains to a product’s development is traced back to the original requirement and its evolution in real time – enabling better test coverage and quality control to ensure a customer’s ever-changing needs are met.
In his latest article, Matt Angerer – a pre-sales architect at Tasktop – provides seven key reasons why Value Stream Integration is so integral to quality assurance:
- Higher Awareness of the QA Function to Improve Software Quality and Delivery Velocity – Value Stream Integration surfaces the good, the bad, and the ugly as requirements are conceptualized, designed, and documented. QA is no longer an “afterthought” (test what we can when we can), it’s a fully integrated function of the SDLC. QA now has a seat at the “adult table” when organizations embrace the concept of value stream integration end-to-end.
- Dramatic Improvements to your Defect Detection Effectiveness (DDE) – Bridging ITSM with ADM, calculating Defect Detection Percentage (DDP) on-the-fly – It’s not just about Code Commit to Release Time, DDP helps organizations measure how effective their regression testing is at trapping bugs before release. Value Stream Integration bridges the gap between ADM and ITSM.
- More Effective Change Impact Analysis, Control, and Management – Closing the feedback loop to Fast Change Requirements. How many times have we seen organizations using Microsoft SharePoint lists to track Change Requests (CRs), separate from the tool they are using to develop test cases for Unit, System, Integration, Regression, and UAT? Disaster looms if you can’t associate the artifacts.
- Improved Test Coverage with Real-Time Feedback Loops – Infusing cross-platform alerting capability for work artifacts is central to driving Software Quality Assurance. Testing must always mirror requirements, and one should always question the validity of a test case without an associated requirement to cover. Let me explain what cross-platform alerting is and how value stream integration drives a higher level of awareness and quality assurance.
- “Shifting Left” to Reduce Costs and Improve Team Morale – involving QA very early in the SDLC – eliminate the “throw it over the fence” mentality and root out defects very early. Shifting left is all the buzz in the industry when it comes to improving software quality. How do we implement this concept though?
- Elimination of the “Ping Pong Effect” – Developer and Tester Alignment can be tightened across tools with a focus on Value Stream Integration. Bug fix time improves, and test coverage improves as QA Analysts aren’t explaining each step they took in the software under test to uncover a bug.
- Accelerated Buildouts of Global Testing Centers of Excellence – Building a Global Testing Center of Excellence (TCoE) does not require a unified tool as a single source of record for all working artifacts (releases, requirements, tests, defects, reports). One size does not fit all. Establishing a model of communication within your TCoE for all tributaries to converge into one river produces better results than tool consolidation. You can thrive with a multi-tool strategy across your lines of businesses. Let me explain why and how.
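Reason two above leans on Defect Detection Percentage, which is commonly formulated as the share of all known defects that testing trapped before release. A minimal sketch of that calculation (the example counts are invented):

```python
def defect_detection_percentage(found_pre_release: int, found_post_release: int) -> float:
    """DDP: the share of all known defects that testing trapped before release."""
    total = found_pre_release + found_post_release
    if total == 0:
        return 0.0  # no defects known yet; nothing to measure
    return 100.0 * found_pre_release / total

# Example: regression testing trapped 45 defects; 5 escaped to production
print(defect_detection_percentage(45, 5))  # → 90.0
```

The catch, as the article notes, is that the post-release count lives in the ITSM tool while the pre-release count lives in the ADM tool, which is why computing this "on-the-fly" requires bridging the two.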
Inspired by Matt’s trials and tribulations as a Program Testing Consultant, we will be hosting a webinar on Wednesday 27th June (9am Pacific, 12pm Eastern) to discuss how Value Stream Integration can revolutionize how you deliver and protect software quality for lasting results. Key takeaways from this webinar include:
- How to elevate the “QA Brand” in your organization with Value Stream Integration
- How to improve your Defect Detection Effectiveness (DDE) by understanding 2 key principles
- Techniques to drive effective change impact analysis, control, and management of changes
- How to improve automated test coverage with real-time feedback loops
- Why “swimming upstream” creates high quality software
- How to eliminate the “Ping Pong Effect” and achieve lasting developer & tester alignment
- Debunking the “One-Size-Fits-All” trend to platforms, governance, and Testing Centers of Excellence
Want a more personal touch? Request a highly-customized demo of how Tasktop can help you connect your end-to-end value stream to help you to measure, improve and optimize your enterprise software delivery.
The post Seven Reasons Why Value Stream Integration Improves Software Quality Assurance appeared first on Tasktop Blog.
“DevOps Enterprise Summit is the epicentre of learning for leaders and technologists wanting to see their digital transformations through to success. No other event I know goes as far in combining technology foundations with business results.” – Dr. Mik Kersten, CEO and co-founder, Tasktop
At DevOps Enterprise Summit London 2018 (June 25-26, Intercontinental – The 02, London, UK), Tasktop (Booth 44) will explain why it’s time to go beyond CI/CD by adopting an end-to-end mindset to software delivery. By looking “outside” of DevOps/release stages, Tasktop will help organisations to:
- Scale their DevOps transformations
- Accelerate their software delivery
- Greatly improve their Time to Value (TtV)
- Visualize flow to find bottlenecks/root causes of sub-optimal performance
- Glean flow metrics for continuous improvement
In London, United Kingdom – the world’s leading technology hub – Tasktop will engage with the DOES 18 community with a clear yet powerful message: “You won’t succeed by just focusing on “code commit to deploy” – you must think end-to-end if you want to accelerate the business value of your software delivery.”
With an expected attendance of nearly 800, DOES London will be jam-packed with two days of informative keynote and breakout sessions, networking, and a buzzing expo hall. Needless to say, we’re planning on making a pretty big splash in the big smoke…
The greatest minds
As well as exhibiting and sponsoring DOES 2018, Tasktop will be participating in the speaker programme alongside the likes of Gene Kim, Nicole Forsgren, John Willis and other leading lights of the DevOps movement.
Taking Tasktop’s “think end-to-end” message to the stage, attendees will hear compelling sessions from Dr. Mik Kersten (CEO and co-founder, Tasktop), Dominica DeGrandis (Director of Digital Transformation, Tasktop), Nicole Bryan (VP of Product Management, Tasktop) and Naomi Lurie (Product Marketing Director, Tasktop), as well as Carmen DeArdo from Nationwide Insurance, a visionary Tasktop customer.
Monday, June 25 – 11:15 AM – 11:45 AM – Breakout B
Conflicting Priorities: How to Visualize Impacts to your Workflow & Metrics
Dominica DeGrandis, Tasktop
Complex problems that prevent your team from delivering business value predictably can be improved by visualizing the impacts of conflicting priorities–in both your workflow and your metrics. In this talk, DeGrandis shows you how to improve your DevOps implementation and influence others by using the power of visualization.
Monday, June 25 – 11:55 AM – 12:25 PM – Breakout D
The Ant-Man Perspective: What Enterprise IT Can Learn from a Startup’s Journey in Value Stream Management
Naomi Lurie and Laksh Ranganathan, Tasktop
This session will share Tasktop’s experience in laying down the infrastructure for value stream management. They’ll talk about the process, the hurdles they faced, their biggest wins two years in, the metrics they’ve captured, and what they still need to do. Lastly, they’ll share their thoughts about how this experience would play out at a much larger organization.
Monday, June 25 – 3:00 PM – 3:30 PM – Breakout B
Project to Product: Practical Realities at a Large-Scale Enterprise
Nicole Bryan, Tasktop & Carmen DeArdo, Formerly Nationwide Insurance
Moving to a product-based value stream for large enterprises is daunting. Nationwide Insurance is experimenting with user experience-based value streams as part of moving to a product approach. Hear what has worked, what hasn’t, and practical considerations to designing large scale product value streams.
Tuesday, June 26 – 1:55 PM – 2:25 PM – Breakout B
Project to Product: How Value Stream Networks Will Transform IT and Business
Mik Kersten, Tasktop
Projects, organization charts, and enterprise architecture are the best representations of value creation we have today. They are failing us, and we know it. To survive digital transformations, enterprises need a new approach for connecting the business to the software delivery pipeline.
When you enter the boisterous expo hall, the Tasktop booth is on your immediate left (P44). We’ll have company information, product collateral, and various #swag for you to always remember us by. Additionally, Tasktop team members will be available to answer any questions you may have about Tasktop and how we help enable DevOps success with Value Stream Management.
Find your bottleneck – one to one meetings
Not sure when you can swing by the booth to learn more about Tasktop or have your technical questions answered? No problem. Schedule an on-site meeting with the Tasktop team before you arrive at a time that works best for you.
If you have any other questions about Tasktop at DevOps Enterprise Summit London, I am happy to answer them. Just leave a comment or email me at firstname.lastname@example.org.
We hope to see you in London!
The post Tasktop at DevOps Enterprise Summit London 2018: Time To Think End-To-End appeared first on Tasktop Blog.
We’ve all seen those projects that start with fast builds. Fast builds that build everything and run all of the tests are a great way to get feedback on your code review, whether it be Travis commenting on your GitHub Pull Request or Jenkins voting on your Gerrit code review. It’s almost inevitable that successful projects end up with growing build times, often to the point where the feedback loop on our changes is out of control. This post is about solving that problem.
Traditional Approach
I’ve seen builds go from a few minutes, to 8 minutes, to 12, 15, and before you know it builds are taking more than 40 minutes. Along the way we convince ourselves that it’s OK, until eventually build times are out of control and review feedback cycle times are out the window. Keeping build times to under 20 minutes is key to any kind of efficiency, with sub-10 minute builds being the sweet spot.
Fixing the problem becomes a monumental task as the complexity of the project grows, and prioritizing that work always seems to take a back seat to more urgent feature delivery. To make it worse, as the team grows and one team becomes two or more, the responsibility of fixing the issue is not squarely within a single team and so the problem continues to go unsolved.
Our initial cut at reducing build times took some of the usual tacks:
- parallel builds with better modularity
- faster build machines
- running fewer concurrent builds so that build machines have a lighter load
- profiling/optimizing long-running integration tests
These all made a difference, but we eventually reached the limit of what we could achieve with these approaches and still, build times were reaching at best 35 minutes and in excess of 50 minutes on a bad day.
A New Take
If we can’t make our tests any faster, and we can’t make the build machines run those tests any faster, what if we just ran fewer tests? Common sense tells us that we need to run all the tests – but what if we don’t need to?
If we accept the premise that everything on master has already passed all of the tests, it follows that we really only need to run tests for the code that’s affected by the last change, that is, the change in our code review.
By taking the intersection of the last commit (i.e. the code review) with the transitive Maven POM dependencies and hierarchy, we can dynamically determine which projects should run tests. This is relatively trivial by using the Maven Plugin APIs and JGit (for Git repositories).
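The core idea can be sketched in a few lines. The following is a simplified illustration in Python, not the plug-in's actual implementation (which works against the Maven Plugin API and JGit): map each changed file to its module, then expand that set transitively through an inverted dependency graph. All names and data shapes here are hypothetical.

```python
def affected_modules(changed_files, module_dirs, dependencies):
    """Return the set of modules whose tests must run for a change.

    changed_files: file paths touched by the last commit (the code review)
    module_dirs:   {module_name: source-directory prefix}
    dependencies:  {module_name: set of modules it depends on}
    """
    # 1. Map each changed file to the module that contains it.
    directly_changed = {
        name for name, prefix in module_dirs.items()
        if any(path.startswith(prefix) for path in changed_files)
    }
    # 2. Invert the dependency graph so we can ask "who depends on me?".
    dependents = {}
    for name, deps in dependencies.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(name)
    # 3. Expand transitively: anything that directly or indirectly
    #    depends on a changed module is also affected.
    affected, stack = set(directly_changed), list(directly_changed)
    while stack:
        module = stack.pop()
        for dependent in dependents.get(module, ()):
            if dependent not in affected:
                affected.add(dependent)
                stack.append(dependent)
    return affected
```

With modules `core`, `api` (depends on `core`) and `web` (depends on `api`), a change under `core/` would select all three for testing, while a change under an independent `utils/` module would select only `utils`.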
To run this experiment, I created the maven-change-impact Maven plug-in. Details are on the project page on GitHub, but to summarize: the maven-change-impact plug-in sets a Maven property based on analysis of the Git change and the POMs. This property can then be used to enable or skip execution of tests using the Maven Surefire Plugin on a per-project basis.
Results
We’ve been running this experiment for two weeks now and we’ve seen impressive results.
On average, our builds have come down to 12 minutes. Some builds are running as fast as 4 or 5 minutes, while others are in the 18-20 minute range. This is a massive improvement over our previous average, which was in the range of 40-50 minutes.
Side-Effects
With faster builds, we have the initial benefit of faster feedback cycle time which translates to keeping developers in the flow.
But, there are other benefits too:
With faster builds, we’ve seen an increase in the number of review verification builds that we’re running. In other words, we have better throughput. This translates to being able to move more changes through our value stream, faster. In other words, more value to the customer and improved ability to react to change.
Developers are no longer incentivized to create large reviews. More but smaller changes reduce the cognitive overhead in code reviews and make for better separation of unrelated changes. So developers are able to collaborate more effectively, and it’s easier to understand the scope of any given change.
Applicability
This approach will benefit teams the most in cases where build times are long primarily due to test execution times, and you have reasonably good modularity in your codebase.
For example, if you want to run integration tests in addition to unit tests to verify code reviews and your integration tests take more than a few seconds each, test execution times can really add up. This definitely applies in our case, where we have more than 30,000 tests.
Try It Out
I invite you to try out the maven-change-impact plug-in to see if it makes a difference for you and your team. Enjoy, and don’t hesitate to let me know if it works out for you!
This article was originally published at greensopinion.com.
Improve your defect reporting and resolution process and MTTR by integrating Micro Focus ALM, Jira and ServiceNow
Developer-tester collaboration is one of the most common integration patterns for a reason. It makes sense that you’d want the specialists who write the code to work in harmony with the specialists who test the code to optimize the defect reporting and resolution process by:
- reducing defects
- improving Mean Time To Resolution (MTTR)
- enhancing product quality
- obtaining end-to-end visibility and traceability
That’s why many of our customers connect their Agile planning tools (such as Jira) with their test management tools (such as Micro Focus ALM (formerly HPE ALM)). By flowing defects automatically between the two tools, they remove the need for time-consuming, error-prone manual handoffs to help optimize the speed and accuracy of their tests.
Yet there’s another vital step in the defect resolution process that is often overlooked – the customer. Product issues in the field generally originate from the end user and are flagged to the business via your IT service teams, who work in ITSM tools such as ServiceNow.
Naturally, there’s a lot of focus on the developer-tester dynamic within the first round of production to reduce the chances of going live with too many bugs. But in the Agile “get urgent features in production and in the customer feedback loop ASAP” world, there are always going to be issues when applications go live. It’s the nature of the beast.
If anything, logging and swiftly responding to customer feedback is just as important, as it both ensures minimal product downtime and demonstrates a commitment to the customer to ensure their product is of high quality and delivering the value expected. That’s why connecting your service teams to the development and test/QA teams is a crucial step in improving your defect reporting and resolution process and optimizing your software delivery value stream.
Value Stream Integration creates a visible, traceable record of all work activities that helps you glean useful metrics around how long the process is taking, how long work sat in a work station queue, how long it was actively worked on, and so on – all information that can be measured for process optimization.
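To make those metrics concrete, here is a minimal sketch of how flow time and time-in-state can be derived from an artifact's state-transition history. The state names and data shape are assumptions for illustration, not Tasktop's actual data model.

```python
from datetime import datetime, timedelta

def flow_metrics(transitions):
    """Compute flow metrics for one work item.

    transitions: time-ordered list of (state, entered_at) pairs,
    ending with a terminal state such as "done".
    Returns (total flow time, {state: time spent in that state}).
    """
    time_in_state = {}
    # Each state's duration is the gap until the next transition.
    for (state, entered), (_next, left) in zip(transitions, transitions[1:]):
        time_in_state[state] = time_in_state.get(state, timedelta()) + (left - entered)
    # Flow time: first recorded timestamp to the terminal transition.
    flow_time = transitions[-1][1] - transitions[0][1]
    return flow_time, time_in_state

# Example: a defect that sat in a queue for two hours before work began.
start = datetime(2018, 6, 1, 9, 0)
hour = timedelta(hours=1)
history = [
    ("new", start),
    ("in progress", start + 2 * hour),
    ("in review", start + 5 * hour),
    ("done", start + 6 * hour),
]
flow, per_state = flow_metrics(history)
# flow == 6 hours; per_state["new"] == 2 hours of queue time
```

Comparing queue time (here, time in "new") against active time is exactly the kind of signal used to spot bottlenecks in a value stream.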
In the below video, you can see how easy and valuable connecting Jira, Micro Focus ALM and ServiceNow can be for your software delivery performance:
Value Stream Integration and Quality Management
In many organizations, it’s up to the testing and QA teams to declare whether an application is ready to ship and deliver value to the customer. In order to make that critical decision, they need real-time information from across the toolchain to assess the health of a product. Value Stream Integration helps flow that critical information across tools to improve Quality Management. Check out the white paper below to learn more:
Click on image to download.
Further improve your product quality by integrating Micro Focus ALM and Tricentis Tosca
The majority of organizations must contend with the daily challenge of managing a complicated suite of automated and manual tests. They know that both forms of software testing offer their own advantages and disadvantages, and that by integrating the two – through tools such as Tricentis Tosca and Micro Focus ALM (formerly HPE ALM) – they can improve the overall speed and quality of their enterprise software products. Read on.
I have been fortunate enough to work on a variety of teams and companies in the software industry, each with different methodologies and varying efficiency. Throughout my career as a software developer, I’ve thoroughly enjoyed the problem-solving aspects of working in a development team, and would like to share a couple of the challenges that my former colleagues and I overcame.
One of my earliest roles was as a junior engineer in a large company on an Agile team. Inadequate infrastructure and un-scalable testing architecture harshly hindered attempts at becoming more Agile. One of the team goals was to reduce the defect backlog. The development team would temporarily alter its defect to feature ratio, and the QA team would be loaned some volunteer workers from other teams so it could test more regularly within sprints.
At least a full day of testing by three engineers was required for every sprint as Continuous Integration with automated builds and testing had not yet been set up. Most of the testing was far from being automated and required availability of limited specialized hardware. Even when it was automated, it often meant taking away a specialized device from manual testing inventory.
Consequently, the development and test cycle became longer. Over-reliance on manual testing and failure to inspect critical infrastructure initially led to wasted manpower. Several sprints later, these issues were overcome with a harsh review of the goal priorities. Continuous Integration was prioritized and the specialized device shortage was addressed in the budget. So how can we prevent problems like these arising? By centering efforts on scalable workflows earlier on, and rising above the “good enough for now” mindset. By the time I had left, the company was well on the way to being truly Agile.
In another company, as a junior engineer on a much smaller team, I encountered a different set of problems, chiefly poor cross-team communication and a high defect rate. One of the reasons was that, like many time-pressed and under-pressure developers, we took shortcuts in the pursuit of reducing overhead. Features that could take weeks to develop would have no paper trail or accountable visibility of progress until a code review on their final day.
At the time, I convinced myself that these shortcuts were allowable in a small team. I would update my team lead of my progress directly using IM. Relying on long streams of chat history and people’s memory seemed perfectly fine. At the time, I was strongly in favor of over the shoulder code reviews for anything less than 50 lines of code.
It took us a while to swallow our pride and realize that these shortcuts were more likely to cause problems than to save any meaningful work. I now cringe when I think of issues that will be encountered with that tweaked code that has little to no context in its revision history. The thought of a Jira issue marked as “done” but devoid of any detail except for its brief summary now makes me uneasy.
Traceability of work is important because it is not a case of if one forgets, but a case of when one forgets. Today I am very glad to be part of a team that takes these work processes seriously. By having a defined and up-to-date team process page with checklists for everything from tech debt creation to epic verification steps, developers are much less likely to be tripped up by mistakes from the past.
So how about you? This is a topic on which everyone has something to add. No doubt there are plenty more insights that could be added to this list!
The post Learning on the job – the never-ending education as a software developer appeared first on Tasktop Blog.
The majority of organizations must contend with the daily challenge of managing a complicated suite of automated and manual tests. They know that both forms of software testing offer their own advantages and disadvantages, and that by integrating the two – through tools such as Tricentis Tosca and Micro Focus ALM (formerly HPE ALM) – they can improve the overall speed and quality of their enterprise software products.
Manual testing, for instance, plays a crucial role with regards to inspecting and improving the user experience of the software. Expert testers can draw on their skills, expertise and experience to manage tests that are unplanned/ad-hoc, urgent, complex or light on documentation. As an end-user themselves, testers know what to look for in terms of usability and functionality. In that respect the human brain is just as important as any piece of code in software delivery.
Automated testing, meanwhile, makes up for the shortcomings of manual testing. In a world of frequent code changes and time-consuming repetitive tasks, automation can pick up the slack – especially when it comes to managing large mission-critical product portfolios and serving huge userbases. Work of that magnitude is too much for the human brain and touch. Automation eliminates a lot of this time-consuming, non-value adding work, freeing up QA teams to focus on more specialized quality assurance activities such as:
- Requirements Ambiguity Check
- Developer-Tester Static Code Inspections
- QA Metric Design
- Automated API Test Design
Many of our customers use both manual testing tools (such as Micro Focus ALM) and automated testing tools (such as Tricentis Tosca). By integrating both of these tools, the quality and breadth of their test coverage is amplified dramatically. All test-related information can be automatically flowed between the two, eradicating manual tasks of communication (such as email, status meetings and duplicate entry) to speed up the process and reduce the ever-looming threat of human error.
In the below video, you can see how integrating the two tools can improve product quality, collaboration and visibility:
Key Tasktop features and benefits
- Share requirements with test automation engineers early on, so they have plenty of time to design automated tests and ensure adequate test coverage.
- Centralize data about defects logged in multiple tools into one tool (acting as master) for the sake of easy reporting.
- Product owners, business analysts and testing teams can collaborate directly from their tool, never having to copy and paste information between tools.
- Teams can exchange comments directly on the artifact inside their own tool, maximizing productivity and creating a completely traceable thread of communication.
In many organizations, it’s up to the testing and QA teams to declare whether an application is ready to ship and deliver value to the customer. In order to make that critical decision, they need real-time information from across the toolchain to assess the health of a product. Value Stream Integration helps flow that critical information across tools to improve Quality Management. Check out the white paper below to learn more:
Click on image to download white paper.
Step by step – integrate your whole Software Delivery Value Stream
Integrating testing tools is only one step in improving the quality and delivery speed of software products. Organizations are also integrating development tools to testing tools to help them spot and remediate defects even faster, as well as integrating development and test tools with all other key stages of the software delivery value stream. You can read more on integrating Agile Planning tools (such as Jira) with Test Management tools (such as Tricentis Tosca) in the below blog:
Connecting an entire value stream can at first seem like an overwhelming task. But most leading organizations are connecting their value streams in an incremental way. They start by implementing one or two ‘integration patterns’ – such as Micro Focus and Tosca with Jira – that allow them to connect parts of their value stream. Over time they add more and more, with the ultimate goal of a fully integrated value stream.
You can learn more about Value Stream Integration and how to manage your value stream in the below e-book:
Click image to download e-book
The post Complete Test Management: Integrate Tricentis Tosca and Micro Focus ALM for the best of both worlds appeared first on Tasktop Blog.
Integrate Atlassian Jira and Tricentis Tosca to improve the quality and traceability of your software products
We recently analyzed the value streams of 300+ leading U.S. enterprises across multiple industries to better understand how organizations are using Value Stream Integration to improve their software delivery process.
The research unearthed a number of fascinating insights, including the fact that 66 percent of enterprises are using 4-8 integration patterns, with the tester-developer alignment being the most popular (used by 70 percent). The popularity of this pattern highlights the ongoing challenge of improving the quality of software products at an enterprise-scale.
This challenge can be overcome by integrating development and test management tools such as Atlassian Jira (an Agile Planning tool) and Tricentis Tosca (a Test Management tool). Given that the alignment between developers and testers is such a critical issue within the software delivery process, it’s not surprising that this pattern is one of the most common that we see.
Integration helps our customers to identify and remediate defects faster for better product quality and to ensure all core requirements are covered by test cases to meet compliance obligations. In the below video, you can learn how Tasktop can automatically flow defects between Jira, Tosca and all other tools used for planning, building and delivering software to eliminate manual tasks such as duplicate entry, status meetings and email.
Key benefits of integrating Atlassian Jira and Tricentis Tosca
- Synchronizes artifacts from end-to-end across the software delivery value stream, allowing information to flow freely between Tosca, Jira, Micro Focus ALM, Polarion and many others
- Improves team collaboration by connecting Tosca to third-party tools, allowing artifacts to be synchronized across the lifecycle
- Supports cross-tool traceability and reporting, particularly between requirements and defects – removing the need for manual processes and spreadsheets
- Synchronizes requirements from Requirement Management tools or user stories from Agile Planning tools to Tosca requirements
- Synchronizes failed tests from Tosca to defects in Agile tools
- Allows for automatic reporting of defects found during test execution from QA to development
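The last two items above can be illustrated with a small sketch: turning failed automated test results into Jira-style defect payloads. The field layout follows Jira's REST create-issue format, but the project key, issue type and result shape are hypothetical, and a real integration like Tasktop's is configuration-driven rather than hand-written code like this.

```python
def defect_payload(test_name, error_message, project_key="DEV"):
    """Build a Jira-style create-issue payload for one failed test.
    The project key and field choices are illustrative assumptions."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"Automated test failed: {test_name}",
            "description": error_message,
        }
    }

def defects_for_failures(results):
    """results: list of (test_name, passed, error_message) tuples.
    Returns one defect payload per failed test; passing tests are skipped."""
    return [defect_payload(name, msg) for name, ok, msg in results if not ok]

# Example: only the failed checkout test produces a defect.
payloads = defects_for_failures([
    ("login_test", True, ""),
    ("checkout_test", False, "Timeout waiting for payment gateway"),
])
```

In practice each payload would then be POSTed to the Agile tool's REST API, and the resulting defect kept in sync with the originating test artifact.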
Connecting an entire value stream can at first seem like an overwhelming task. But most leading organizations are connecting their value streams in an incremental way. Most organizations start by implementing one or two ‘integration patterns’ – such as Jira and Tosca – that allow them to connect part of their value stream. Over time they add more and more, with the ultimate goal of a fully integrated value stream.
You can learn more about Value Stream Integration and how to manage your value stream in the below e-book:
Latest insights into enterprise software delivery, scaling Agile and DevOps, and Value Stream Management
With enterprise software delivery being a rather complex endeavour – with many specialized roles, a plethora of tools and rapidly evolving best practices – we’re inundated with questions on a daily basis from organizations looking to make sense of their software delivery.
Below is a list of just some of those frequently asked questions, with links to resources that will help you answer your questions and better understand how to improve the way you plan, build and deliver software at scale.
This information will provide you with key insights and tips to support your transformational journey, no matter what stage you’re at. And if you still have questions, you know who to come to…
Software delivery
What are the latest industry trends?
As our co-founder Robert Elves says, predicting the future of enterprise software delivery can often feel like hammering Jell-O to a wall. That’s not to say it’s impossible – check out our blog 10 predictions and trends for enterprise software development and delivery in 2018.
Integration
Why do we need enterprise toolchain integration?
Most people understand that they need tool integration in some form or another as the benefits are obvious and logical, especially at a team level. For instance, improving the quality and speed of tests by connecting developer tools (such as Jira) and testing tools (such as Tricentis) to automatically flow defects in real-time for faster resolution. But as we explain in this white paper, integration is much bigger than that – it’s a key differentiator in a digital world.
Do we really need integration if we’re going to streamline and consolidate our software delivery process into one tool, such as Jira?
Having ‘one tool to rule them all’ is naturally appealing in terms of perceived simplicity and cost benefits. The idea, however, is inherently flawed. The truth is, you can’t migrate all the specialist stages of enterprise software into one tool. This white paper debunks the ‘one tool fallacy’ and explains why an integrated best-of-breed strategy is crucial to optimizing your software delivery at scale. In addition, this article explores the complex role that Jira plays in your value stream.
We have a million and one business initiatives to implement, so integration can wait, right?
Sure, you can wait – but your competitors won’t. And the longer you wait, the further you’ll fall behind. This white paper provides six key reasons why you need Value Stream Integration today.
We’ve already invested in DevOps and Release Automation tools, do we really need toolchain integration as well?
Yes, if you really want to accelerate delivery and Time to Value. In her article Why you need enterprise toolchain integration alongside release automation, our Product Marketing director Naomi Lurie explains the fundamental differences between the two, and how integration actually optimizes release automation and other DevOps capabilities.
Everything we need to do is heavily regulated and must be compliant – does integration help with traceability?
It sure does. In fact, end-to-end integration is the only way you can trace and keep track of the complex network of activity that underpins your software delivery. Our Knowledge Architect Manager, Brian Ashcraft, artfully explains why integration is so crucial to traceability in his article A new approach to traceability can solve product validation issues.
How are our competitors using value stream integration?
We analyzed over 300 software delivery organizations, many of whom are in the Fortune 100, to see how they’re using integration to accelerate delivery, the results of which are discussed in this webinar.
Agile and DevOps
Why are our Agile and DevOps transformations failing at scale?
If we had a penny…! One of the most frequent questions that we’re asked. You’ve invested in Agile and DevOps, tooling and people, yet your Time to Value is still too long, unpredictable and unmeasurable. Our e-book highlights some of the major reasons you’re struggling to see benefits at an enterprise level.
How do I measure DevOps bottlenecks?
Another major challenge for organizations. Wasn’t DevOps meant to solve the bottleneck between code commit and deploy to accelerate delivery? Well, it has all but eliminated that bottleneck – but that’s only one stage of the software delivery process. What about everything that happens before and after? As our Director of Digital Transformation, Dominica DeGrandis, writes in a recent article, you need full end-to-end visibility from ideation to production to identify new bottlenecks (and optimization opportunities) in your value stream.
What sort of metrics should we use for measuring DevOps/software delivery?
Delivering value to the business through software requires a network of teams, disciplines, tools and processes. Managing and improving these processes and systems requires real-time insight and measurement. In this article, our founder and CEO, Dr. Mik Kersten, and DevOps expert Dr. Nicole Forsgren discuss measurement approaches through system and survey data.
Value Stream
How is software delivery a “value stream”?
The short answer is, how isn’t it? Software’s only purpose is to create value for the end user, and as such, organizations need to better understand how value flows from customer request to working software in production. Another excellent article by Dominica – Rowing in the same direction: use value streams to align work – explores what value streams are, why they matter, and how to exploit them for faster, better software delivery.
What is Value Stream Management?
Once people realize that their software delivery process comprises multiple value streams, the next question is: how on earth do I manage them? The answer is Value Stream Management (VSM). Everything you need to know about VSM, and toolchain integration’s role within it, can be found in this e-book. What’s more, a new report from Forrester Research, Inc. – Elevate Agile-Plus-DevOps with Value Stream Management by Christopher Condo and Diego Lo Giudice – has concluded that the “time is right” for Value Stream Management in enterprise software delivery. Our webinar with Forrester explores the report’s findings.
How is software delivery a “network”?
A common misconception is that technology underpins a software delivery value stream. That is only half-true; technology is merely an enabler for better work. As our VP of Product, Nicole Bryan, explains in her article The human touch – the key to optimize your software value stream network, it’s actually people who drive value creation. And they collaborate through a social network of communication that relies on the real-time flow of product-critical information between disparate tools.
Still have questions? The below content may help:
- Webinar: Discovering dark debt in your culture
- Webinar: The evolution of Agile Portfolio Management for Scaled Agile Success
- Webinar: Gene Kim and Mik Kersten discuss the DevOps movement in 2018
- Article: Requirements management and software integration: why one can’t live without the other
- Article: What IT can learn from The Beatles’ breakup
- Article: Optimizing feature request and implementation by integrating Salesforce, Targetprocess and Jira
- Article: How to accommodate different processes in enterprise software delivery
And don’t forget to subscribe to our YouTube channel, where you can find demo videos for a host of tool integrations that will help you drive efficiencies across your software delivery value stream and business, including:
You can also find more educational content in our comprehensive resource library.
The post Latest insights into enterprise software delivery, scaling Agile and DevOps, and Value Stream Management appeared first on Tasktop Blog.
I am a staunch believer in the “12th man” – the notion that fans of an eleven-a-side sports team can tip the scales in their team’s favour with fervent support. That when the players’ legs are heavy, and their backs are against the wall, our unwavering chanting, cheering and drumming can create a cauldron of noise that galvanizes and inspires the team to victory.
Take West Ham, my English football (“soccer”) team, for example. It’s May 2016, our last ever game at the Boleyn Ground, our historic home for over a century. We want to say an emotional goodbye to our beloved stadium with an iconic victory over Manchester United, one of the biggest and richest clubs in the world.
Despite going 1-0 up after ten minutes, we found ourselves 2-1 down with 18 minutes to go. It could’ve been so easy, and even understandable, for the players to crumble under the pressure, to melt under the magnitude of the occasion. The players’ heads could’ve dropped, and the fans could’ve fallen into a grieving silence. But we didn’t; our response was to roar the team back to life.
Just four minutes later, we’re level again. And four minutes after that equalizing goal, we’re in the lead…3-2 to the Hammers! At this point, all the home fans and players are one. As we defended for our lives, every tackle and clearance was celebrated as if it were a goal.
Hell, it felt like we – the fans – were making those last ditch sliding tackles, those goal line clearances. That we were on the field, on that oh so beautiful hallowed turf, spurring our heroes on. Side by side, fans and players working together to ensure the Boleyn Ground’s legacy would forever be associated with victory.
You can see goose bump-inducing highlights below:
Yes, the 12th man is very real. Yet that year, the 12th man only got us to 7th in the league. Leicester City, on the other hand, had something even more special. Sure, the Foxes have a fantastic fan base that certainly played a big role in their historic 2015/2016 season, but they also possessed the 13th man: sports performance analytics. And arguably it was this extra “player” that gave them the edge in their famous Premier League title win that season.
In her latest article How sports performance analytics can help technology organizations win the game, Dominica DeGrandis digs into how Leicester defied 5000-1 odds through clever use of performance analytics, and explains what technology organizations can learn from the story to turn the odds in their favour.
If you want to know more about how Tasktop can help you collect real-time data across your software delivery value stream to glean powerful insights to improve your IT performance, speak to one of our experts today.
The post The 13th Man – what tech organizations can learn from sports performance analytics appeared first on Tasktop Blog.
“Today the game is being played much more near mid-field – that’s where customers are. It is a false distinction to say it’s retail vs. online because that’s not the way customers think. They want this seamless, integrated experience across all channels. Customers don’t think about channels. They just want to buy the product.” – Terry Lundgren, former executive chairman and CEO, Macy’s
When Terry Lundgren speaks, you listen. The recently retired executive chairman and CEO of Macy’s has been on the frontline of the evolving retail industry for over a quarter of a century. There are few people with a better understanding of retail survival strategies.
The Californian successfully spearheaded the preservation and prosperity of one of the oldest retail chains in the U.S. through a gauntlet of challenges, such as the 2008 recession, the decline of the high street, the meteoric rise of e-commerce, and a relentless spawning of new digital-native competitors.
The proof is in the pudding; in 2017, Macy’s was ranked the 4th top U.S. retailer in online sales, generating over $4.6 billion via e-commerce channels (behind Amazon, Apple and Walmart). Macy’s success is not surprising – the company realized early on that the future of retail is not about online or offline, it’s about delivering a product when a customer wants it. It’s about a 24/7, 360° shopping experience.
Software plays a vital role in providing that seamless experience. It’s why Macy’s took measures to improve and optimize its software delivery by integrating the teams that plan, build and deliver their applications. You can read more on how Macy’s used tool integration to save up to 600 hours in manual work in just six months in the below case study:
Traditional retail organizations are urgently looking to improve their software delivery to keep up with e-commerce juggernauts such as Macy’s, Amazon, Ebay, Shopify, Walmart et al. They know that software is what gives them their competitive edge – especially when it comes to supporting internal systems that underpin the complicated mechanics of an omnichannel approach that serves a global customer base.
To improve the delivery speed, quality, reliability and flexibility of their software applications, they’re reevaluating how they deliver software at an enterprise level. They’re doing this by connecting their software delivery value streams.
Download our new e-book on the retail sector to learn more.
The post New e-book: how retailers are using software to deliver better omnichannel experiences appeared first on Tasktop Blog.
“At least 40 percent of all businesses will die in the next 10 years…if they don’t figure out how to change their entire company to accommodate new technologies” – John Chambers, former CEO of CISCO.
There was a reason that IDC predicted that by the beginning of 2018, two-thirds of CEOs of Global 2000 companies would have digital transformation at the center of their corporate strategy. It’s all about innovation and survival.
Digital transformation, in a nutshell, focuses on leveraging current technologies to adapt business strategies, products and services to perpetually evolving customer needs. To that end, the role of IT – and the CIO in particular – is now centre stage, under the spotlight, and the pressure is on.
For a digital transformation to be successful, organizations must work faster to align the business with IT. Fortunately, research suggests that the tide is turning in that respect. The 2018 State of the CIO survey found that almost three-quarters of respondents said that IT and lines of business (LOB) are engaging more frequently in collaborative projects where there is shared oversight.
At long last, IT appears to be shrugging off its techy “functional” tag and the business folk are taking note – 49 percent of LOB respondents consider IT a strategic advisor for proactively identifying new opportunities and for making technology and provider recommendations. Moreover, 88 percent of IT leaders and 64 percent of LOB see the CIO role becoming more digital- and innovation-focused.
From a CIO’s point of view, most see their role as transformational (36 percent) or strategic (45 percent) rather than purely functional (32 percent). Not that this change in perception makes their job any easier. A study by IDG – the CIO’s 17th Annual State of the CIO – found that the majority of CIOs are finding it increasingly hard to balance innovation and operational excellence.
Considering what’s at stake, few could blame CIOs for heading down to Costco and bulk buying Halcion. There is a better option, though, and that’s Value Stream Management – a holistic approach to taking control of an organization’s software delivery value stream to help deliver better business outcomes through IT.
Download our latest e-book by clicking on the image below:
Inside this e-book, you will find:
- An overview of a CIO’s pain points in the modern business landscape
- A definition of value streams and how they apply to software delivery
- An Introduction to Value Stream Management and how it addresses the core issues that organizations are facing with their software delivery at scale
- An outline of the three practices that underpin Value Stream Management and how Value Stream Integration supports those practices
- A guide to the first steps in connecting the value stream and an overview of integration patterns
Want to know more? Speak to us today about a free one-hour consultation with one of our value stream experts to begin visualizing the value streams that exist in your business.

Further reading
- What is Value Stream Management in enterprise software delivery?
- The Time Is Now: new report highlights how Value Stream Management tools can “inform product plans and priorities and point to ways to further optimize software delivery and quality”
- Mind the gap: bridging the divide between the business and Agile/DevOps teams with Value Stream Management (Webinar)
- Value streams in software delivery
- Common integration patterns in enterprise software delivery
- Traceability in software delivery value streams
The post New e-book: Value Stream Management – the key to balancing innovation with operational excellence appeared first on Tasktop Blog.
The Time Is Now: new report highlights how Value Stream Management tools can “inform product plans and priorities and point to ways to further optimize software delivery and quality”
“Value stream management provides greater transparency, measurement, and control of the software delivery pipeline.” – Elevate Agile-Plus-DevOps with Value Stream Management, Forrester Research, Inc., April 20, 2018
A new report from Forrester Research, Inc. – Elevate Agile-Plus-DevOps with Value Stream Management by Christopher Condo and Diego LoGiudice – has concluded that the “time is right” for Value Stream Management in enterprise software delivery.
In our view, the report explains how Value Stream Management enables companies to create real value for their business, leveraging existing investments in Agile and DevOps. CIOs and CTOs can finally align with business needs to create IT strategies that drive the business forward instead of just focusing on IT delivery and cost control.
Acknowledging that organizations have worked hard to “implement the practices and tools of modern software development,” we believe the report highlights that even the most advanced Agile and DevOps programs still suffer from visibility issues, difficulty managing and optimizing delivery pipelines, and confusion over what to measure to enable continuous improvement. The report includes end-user perspectives that provide specific examples of the issues being faced:
“We need to answer questions like ‘Where are the constraints?’ ‘Do we add more people?’ or ‘Do we do something else?’ We wanted to create a model of the value stream, model the artifacts flowing, the metrics coming from that, and take action. That is now how we are moving forward.”
We believe that Forrester identifies Value Stream Management as the solution to these issues, recognizing that the software delivery process comprises value streams for all software-related products and services.

Software delivery as a value stream
Forrester defines Value Stream Management (VSM) as:
“A combination of people, process, and technology that maps, optimizes, visualizes, and governs business value flow (including epics, stories, work items) through heterogeneous enterprise software delivery pipelines. Value stream management tools are the technology underpinnings of the VSM practice.”
Today there are a variety of tools that provide Value Stream Management capabilities as part of their existing solution for Agile planning, ALM or DevOps. Forrester includes Tasktop as an Enabling technology.
“Tasktop provides integration capabilities across a broad range of DevOps tooling – a core VSM capability that enables traceability, visibility, and connections between the people, process, and technology.” – Elevate Agile-Plus-DevOps with Value Stream Management, Forrester Research, Inc., April 20, 2018

Download a complimentary copy of the report by clicking on the above image
Tasktop’s Value Stream Management solution connects an organization’s value stream by integrating the network of best-of-breed tools and teams for planning, building and delivering software at an enterprise level.

Tasktop provides Value Stream Management
Tasktop addresses two core challenges in managing value streams:
- Software delivery work is invisible knowledge work. There are no physical materials to observe as they move through the value stream. It’s hard to comprehend something you cannot see, and even harder to manage it.
- Unless fully automated, transitions between work centers are informal and untraceable. Handoffs take place over email, phone, chat, in spreadsheets or face-to-face meetings. The value stream therefore exists, but only implicitly. It is not tangible – and therefore incomprehensible.

Tasktop addresses these challenges in three ways:
- Automates the flow of information across the value stream – enables the frictionless flow of artifacts (such as defects, user stories, trouble tickets), as well as information from events (such as build failures, version control changesets, security scan vulnerabilities and performance monitoring alerts), across the tools and stakeholders in the software development value stream. This removes non-value added work and bottlenecks; increases velocity and capacity; enhances collaboration; enables automated traceability and even improves employee satisfaction.
- Provides end-to-end visibility into the value stream – when managers want to see metrics and dashboards to understand project status, to optimize the process or to ensure compliance, it has been nearly impossible to get a real-time, holistic view across unintegrated tools. Tasktop unlocks lifecycle data from these application tool silos by automatically compiling lifecycle activity data into a single database. This data can be used to create consolidated, full-lifecycle reports and dashboards, as well as for traceability reporting.
- Creates a modular, Agile toolchain – software innovators require a best-of-breed tool strategy. Tasktop enables organizations to use the products that best support each discipline while getting the benefits of a single, integrated toolchain. Drives more value from each tool; allows organizations to easily add, replace and upgrade them, creating a proactive environment for innovation.
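The first capability – flowing artifacts between tools – can be pictured as translating an artifact through a neutral model. The sketch below is a hypothetical illustration, not Tasktop’s actual API; the tool names and field names are assumptions for the sake of example:

```python
# Hypothetical sketch: flow a defect between two tools by re-keying its
# fields through a neutral model. Tool and field names are illustrative.

FIELD_MAP = {
    # neutral field -> tool-specific field
    "jira":       {"summary": "summary",           "status": "status", "severity": "priority"},
    "servicenow": {"summary": "short_description", "status": "state",  "severity": "impact"},
}

def translate(artifact, source, target):
    """Re-key an artifact from the source tool's schema to the target's."""
    neutral = {n: artifact[f] for n, f in FIELD_MAP[source].items()}
    return {FIELD_MAP[target][n]: v for n, v in neutral.items()}

defect = {"summary": "Login fails on mobile", "status": "Open", "priority": "High"}
print(translate(defect, "jira", "servicenow"))
# → {'short_description': 'Login fails on mobile', 'state': 'Open', 'impact': 'High'}
```

A real integration hub adds conflict resolution, value mapping (e.g. “High” vs “1 – Critical”) and event routing on top of this basic translation step.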
With a fully connected value stream, organizations can begin optimizing total end-to-end lead time – Time to value (TtV) – from initial customer request to software in operation.

Want to know more about Value Stream Management?
Speak to us today about a free one-hour consultation with one of our value stream experts to begin mapping out your value stream, assess the VSM capabilities of your existing tools, and obtain an on-the-spot health assessment of your value stream so you can immediately start optimizing your software delivery process and get ahead of the curve.

Further reading
The post The Time Is Now: new report highlights how Value Stream Management tools can “inform product plans and priorities and point to ways to further optimize software delivery and quality” appeared first on Tasktop Blog.
Code review tools such as GitHub PRs and Gerrit do a great job of enabling developers to collaborate, but their file- and line-oriented approach leaves abstractions out of the picture. Developers must have the capacity to see the abstractions by reading the code, or risk having code reviews devolve into a critique of syntax, whitespace and implementation detail. The intrinsic ability to see abstractions comes easily to some, but others are at a disadvantage, especially when they are less familiar with the codebase. To address this issue, I conducted an experiment in surfacing code abstractions as a diagram in code reviews.
My premise was that a static class diagram would enable developers to see the major abstractions and their relationships independently of the implementation detail. By having this higher-level view of the code related to a change, developers would have a deeper understanding when performing a code review. My goal was two-fold: to enable ramp-up of less experienced engineers, and to draw more attention to the abstractions, since in many cases they are as important as, if not more important than, the implementation. (A poor abstraction implemented perfectly is in many cases less valuable than a great abstraction with a sub-optimal implementation.)
To run this experiment, I created a plug-in for Gerrit that dynamically generates UML static class diagrams for any code review. These diagrams are intentionally missing detail, taking the UML-as-sketch approach to avoid overly-complicated diagrams.
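The generation step can be sketched in a few lines. This is a simplified illustration, not the actual plug-in: it scans changed Java sources for class declarations and inheritance relationships and emits a sparse PlantUML diagram, deliberately omitting fields and methods in the UML-as-sketch spirit:

```python
import re

# Simplified sketch (not the actual Gerrit plug-in): extract class/interface
# declarations plus extends/implements relationships from Java sources and
# emit a bare-bones PlantUML class diagram.

CLASS_RE = re.compile(
    r"(?:public\s+)?(?:abstract\s+)?(class|interface)\s+(\w+)"
    r"(?:\s+extends\s+(\w+))?"
    r"(?:\s+implements\s+([\w,\s]+))?"
)

def to_plantuml(sources):
    """Build a sketch-level class diagram from a list of Java source strings."""
    lines, relations = ["@startuml"], []
    for src in sources:
        for kind, name, parent, impls in CLASS_RE.findall(src):
            lines.append(f"{kind} {name}")
            if parent:
                relations.append(f"{name} --|> {parent}")  # inheritance
            if impls:
                for iface in impls.replace(" ", "").split(","):
                    relations.append(f"{name} ..|> {iface}")  # realization
    return "\n".join(lines + relations + ["@enduml"])

print(to_plantuml(["public class Hub extends Base implements Flow, Sync {}"]))
```

A production version would parse the language properly rather than use regexes, and would pull in types referenced by the change to show surrounding context.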
I was looking for answers to the following questions in this experiment:
- Could diagrams be automatically generated that were good enough to communicate the major abstractions?
- Is it possible to pull in more of the surrounding context so that the changed code can be evaluated within the context of a larger codebase?
- Would developers find these diagrams useful?
- Would these diagrams affect the code review process sufficiently to drive higher-quality changes to non-trivial systems?
The result of this experiment was surprising. A simple, well-structured code review produced the following diagram:
This is what it looks like on the Gerrit code review:
So far, so good. But, what about a more complicated example? Following is a diagram generated from a real contribution:
This diagram is much harder to read, even when zoomed in. My initial reaction was that this diagram was not useful at all. The code and its surrounding context were simply too complicated to enable generation of a useful diagram. After saying that to myself, it dawned on me: This is exactly why the diagram is useful. The generated diagram is exposing code for what it is: overly complicated with poor abstractions and too many dependencies.
In my experience, most developers aren’t as practiced at reading static class diagrams as they are at reading code. For these diagrams to be useful, developers would need to become proficient at reading them. Proficiency comes with practice, but for that to happen, developers would have to be willing to try out the approach.
This experiment isn’t over. The next step is to try it out for a few weeks and get feedback from developers.
What do you think, would this be helpful for you or your team? If you’re interested to try it out, the project has been published on GitHub.
This article was originally published at greensopinion.com.
Transportation operators have taken a huge leap in the last decade since data connectivity has become available everywhere. Their digital transformation is awe-inspiring.
A fascinating episode of NPR’s Planet Money podcast described UPS’s transformation from a package delivery company to a technology company. The signature brown trucks have become rolling computers full of sensors. Every step a driver takes, every mile she drives, are tracked and analyzed by the company to increase efficiency. UPS is using every speck of information to strategize about the drivers’ tiniest decisions in an effort to optimize further, down to which pocket they use to hold their pen. One minute saved per driver per day saves UPS $14.5 million over a course of a year.
A driver’s handheld computer is equipped with GPS, tracking the driver’s every step. The truck is wired with hundreds of sensors, sending millions of data points to a central data warehouse, where they are crunched and processed. A team of data analysts then combs through the data to discover new ways to shave seconds off deliveries to increase productivity. Today, drivers make 130 deliveries per day, compared with around 90 before this digital transformation.
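As a back-of-envelope sanity check of that $14.5 million figure – the driver count and hourly cost below are illustrative assumptions, not UPS numbers – the arithmetic works out roughly like this:

```python
# Back-of-envelope check of "one minute saved per driver per day is worth
# $14.5M a year". Driver count and cost per hour are assumed, not UPS data.
drivers = 100_000          # assumed number of drivers
working_days = 260         # weekdays in a year
cost_per_hour = 33.50      # assumed fully loaded cost of a driver-hour, USD

minutes_saved = drivers * working_days          # one minute per driver per day
annual_savings = minutes_saved * cost_per_hour / 60
print(round(annual_savings / 1e6, 1))           # → 14.5 (million USD)
```

In other words, the headline number is plausible once you account for the fully loaded cost of a driver-minute multiplied across a very large fleet.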
Kudos to UPS for recognizing that extreme optimization should not come at the expense of employee satisfaction. Since introducing this digital transformation, UPS has doubled driver wages and compensation.
Waze, the community-driven navigation app, is another prime example of how transportation has been transformed by millions of real-time data points from thousands and thousands of drivers.
Based on each vehicle’s location and speed, which are collected passively, as well as incidents actively reported by drivers (road closures, accidents, traffic, police), Waze can find the fastest route to every destination and provide an accurate ETA. The best part? Waze will reroute you if it finds a better route as traffic conditions change.
Isn’t it surprising that the transportation industry is using real-time data to optimize how it delivers value, while most software delivery organizations don’t?

The Real-Time Software Delivery Map
Just like transportation operators, software delivery organizations strive to deliver customer value faster and better – better than they have in the past, and better than any current or future competition.
But so often neither the business nor IT know exactly what’s happening in real-time and when the work will be done. When will that feature or product be running in production? When will that problem be fixed? When is that new modernized system going to replace the legacy stuff causing so many problems?
Despite best intentions and a ton of hard work, for many decision makers as well as contributors, the answer is “I don’t know”. I don’t know when it can fit in the queue, I don’t know how long it’s going to wait at the work centers downstream from me, I don’t know how long it will be until it makes its way to the top of the priority list.
Why can’t we have Waze for software delivery? If huge traffic networks can create real-time optimized routes and accurately predict ETAs, why can’t we do the same for pure software delivery?
The answer is we can, it is completely within every organization’s reach. But it requires three things:
- You need to lay down an infrastructure of connected roads and overpasses for the work to seamlessly travel without getting stuck
- Each work item will need to report its location and status in real time
- You’ll need a system capable of collecting and compiling all those data points, drawing a map of how value flows through your teams, and finding the fastest routes to production
This is what is known as Value Stream Management. We talked about it in a recent webinar.
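A toy sketch of the second and third requirements – work items reporting status events to a central store that computes flow time – might look like this (the event format and names are hypothetical, not a Tasktop API):

```python
from datetime import datetime

# Hypothetical event stream: each work item reports (id, status, timestamp)
# to a central store as it moves through the value stream.
events = [
    ("STORY-1", "created",  datetime(2018, 6, 1, 9, 0)),
    ("STORY-1", "in dev",   datetime(2018, 6, 2, 10, 0)),
    ("STORY-1", "deployed", datetime(2018, 6, 8, 17, 0)),
]

def flow_time_days(events, item_id):
    """End-to-end flow time: days from an item's first to last reported event."""
    stamps = [ts for wid, _, ts in events if wid == item_id]
    return (max(stamps) - min(stamps)).total_seconds() / 86400

print(round(flow_time_days(events, "STORY-1"), 1))  # → 7.3
```

With events flowing in from every tool, the same store can surface bottlenecks – the stages where items sit longest – rather than just a single end-to-end number.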
As we all know, products don’t get created out of thin air. Each specialist contributes their part by working in a specialized tool. Portfolio managers work in PPM tools to create plans and assign budgets, product owners and business analysts work in requirements management tools to define features, developers work in Agile planning and issue-tracking tools to create the design and code, test engineers work in test management tools to design and run tests, and so on.
The value lives in the small units of work, or “work items”, housed in those tools – your Jira, Jenkins, ALM, ServiceNow, and many, many more. Those features, stories, builds, test cases, defects and tickets are capable of telling us their status in real-time, but often have no central “Waze” to report to. Plus, they cannot flow uninterrupted along the value creation route because roads have not been built between the tools. The work items exist in islands with no bridges.
Value Stream Management solutions, like Tasktop Integration Hub, are essentially Waze for software. They connect the tools to let the value flow seamlessly from tool to tool, team to team, specialist to specialist. They gather the individual work item statuses in real-time and reframe them in business terms – so people can see how customer value is flowing. And they communicate the overall picture of how value flows from inception till its final destination, helping organizations make adjustments to get there fastest by eliminating roadblocks, circumventing bottlenecks and rerouting items.

The Road Not Taken
IT faces no shortage of potential ways to spend its budget – Agile and DevOps, modernization initiatives, new technologies like AI and machine learning. What justification is there for putting Value Stream Management at the top of that list?
Perhaps the New York City subway system can provide some answers. According to the New York Times, in New York City trains are terminally late, obstructed daily by a cascade of system failures. During the first three months of 2017, three-quarters of the subway’s lines were chronically behind schedule.
The M.T.A. doesn’t have the capability to gather real-time data from its trains. Every incident requires a lengthy post-mortem in which inspectors must try to gather minute-by-minute accounts of what happened from a signal system dating back to the 1920s and ’30s.
Just to be on the safe side, the M.T.A. has been actively slowing down its trains since the 1990s following some fatal accidents. As a result, it has reduced throughput, which has increased crowding, which slows trains down even more. End-to-end running time during peak hours increased by more than six minutes between 2012 and 2016.
The New York City subway needs to deliver more people than ever before – close to 6 million passengers a day. Yet it has infrastructure completely inadequate for the task, and as a result it’s falling behind in every conceivable metric. Now, something in the range of $110 billion is required to overhaul the Subway system and it will take many, many years to do so. This may even cost Governor Cuomo his office.
The subway is so critical to New York’s economy we believe it will get bailed out and fixed. It’s lucky in that way.
However, a software organization that delivers fewer features while creating worse customer experiences? It won’t be so lucky.
Speak to us today about a free one-hour consultation with one of our value stream experts to begin visualizing the value streams that exist in your business and deliver greater customer experiences.
Tasktop Integration Hub 18.2 is available today, introducing a brand new metrics dashboard, support for ALM project domain changes, support for additional operational databases, and increased versatility in container support (hierarchical structures).

See What’s Flowing with the New Integration Metrics Dashboard
Tasktop has a brand new dashboard where you can see the volumes of artifacts created and updated by your integrations over time. The dashboard illustrates the value of integration to your organization, while also providing a window into interesting trends and patterns in your integration activity.
The new Metrics Dashboard helps you understand just how much data Tasktop has synchronized over the last 30 days. A graph displays the total number of artifacts created by Tasktop and the number of artifact updates in that time period.
Customers with the Ultimate and Enterprise licenses can filter the dashboard by repository and integration, and apply additional time frames. They can also view tables that display cumulative information on all activity, sliced by repository and integration.

Easily Accommodate ALM Domain Changes
Organizations using Micro Focus (HPE) ALM may periodically need to change a project’s domain. With version 18.2, Tasktop Integration Hub will alert and prompt you to update your collections when a project’s domain name has changed. Admins can easily point the collection at the renamed projects, and the integration will resume seamlessly.

Premium Content for Tasktop Users
If you’re a current Tasktop user, you know we have premium content just for you. That includes our full connector documentation, all the answers to your Frequently Asked Questions, and each version’s release notes.
Now you can access all that gated content from one centralized site with a single login: https://docs.tasktop.com/tasktop/premium-content

What Else?
- Hub’s Gateway integration style is used to create and update artifacts based on events in your DevOps tools. Version 18.2 adds the ability to create new folders (“containers”) and work items within a folder structure using Gateway. Enterprise Data Stream can report on containers as well.
- Version 18.2 adds support for MySQL and PostgreSQL as Tasktop Integration Hub’s operational database.
Questions? Do not hesitate to contact us directly.