Tasktop Blog

Connecting the world of software delivery.

Gain powerful insights into your SAFe transformation with Value Stream Management

Wed, 09/19/2018 - 08:09

If traditional enterprises want to tap into the immense business possibilities created by the Age of Software, they need to accelerate the value delivery of their digital products and services – and fast. This isn’t ground-breaking news – most organizations are acutely aware that they need to scale their software delivery capacity and capabilities to extract greater business value from IT. That’s why many CIOs and other IT leaders have invested in the Scaled Agile Framework (SAFe). But the universal struggles of traditional enterprises trying to execute a successful digital transformation indicate that SAFe is not enough. Something else is needed.

Full SAFe represents the most comprehensive configuration, supporting the building of large, integrated solutions that typically require hundreds of people or more to develop and maintain.

Aligning the enterprise with the software delivery process, SAFe uses the Agile Release Train (ART) to unify individual Agile development teams working concurrently on different features of one product or large project. High-priority features that serve core strategic business needs are triaged and progressed by individual teams, with the aim of delivering an integrated product as quickly as possible to shorten time to value (TtV).

Yet many organizations are still struggling to accelerate TtV despite ramping up large-scale development – and despite investing in the best specialist people and tooling, and in methodologies such as Agile and DevOps. Even more troubling, they don’t know why it’s not working.

One of the main reasons for an ineffective and/or unmeasurable SAFe transformation is the inability to see how software is being planned, built and delivered, and how it’s tied to business outcomes. How do you find and fix problems (such as waste and bottlenecks) to improve TtV if you can’t see the business value that software delivery creates? How do you see how your SAFe transformation and initiatives are performing? The key is to make that work visible and traceable from end to end through Value Stream Management (VSM). By connecting the tools and teams within the “ideate”, “create”, “release” and “operate” stages, enterprises can see the flow of work from the business to IT to the customer and back.

 

Many tools are involved in the four main phases that make up the activities and processes of software delivery
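To make the idea concrete, here is a rough sketch in Python – purely illustrative, and not how Tasktop models value streams internally – of a value stream as a chain of stages, each backed by placeholder tools, where every work item keeps a traceable history as it moves from ideation to operation:

from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str    # e.g. "ideate", "create", "release", "operate"
    tools: list  # tools used in this stage (placeholder names)

@dataclass
class WorkItem:
    title: str
    history: list = field(default_factory=list)  # (stage, tool) hops, in order

    def move_to(self, stage: Stage, tool: str):
        """Record the item entering a stage in a given tool, preserving end-to-end traceability."""
        self.history.append((stage.name, tool))

# A toy value stream with placeholder tool names.
value_stream = [
    Stage("ideate", ["ticketing tool"]),
    Stage("create", ["agile planning tool"]),
    Stage("release", ["CI/CD tool"]),
    Stage("operate", ["ITSM tool"]),
]

feature = WorkItem("Customer-requested reporting feature")
for stage in value_stream:
    feature.move_to(stage, stage.tools[0])

# The recorded history is what makes the flow of work visible from end to end.
print(" -> ".join(f"{s}:{t}" for s, t in feature.history))

Once every hop is captured like this, you can start asking where work waits, stalls or loops back – which is exactly the visibility a VSM approach aims to provide.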

In her latest byline for Hacker Noon, Tasktop’s Naomi Lurie explains that while SAFe is a significant guide to success, it’s not a magical methodology. She goes on to explain how Value Stream Management connects all the key elements that underpin SAFe.  Through VSM, CIOs can “scan” their software delivery value streams to expose the flow of activities that create business value, helping enterprises to measure, optimize and build on the foundations provided by SAFe to effectively scale their software delivery.

Read on: How to Ensure Your SAFe Transformation Makes a Difference

Visualize your value stream – consulting sessions

This free one-hour consulting session will help you identify the value streams within your organization today, visualize the flow of work, and help identify opportunities to make your value stream more tangible – and your SAFe transformation a success – Sign up now!

A CIO’s journey

 

Further reading


Product Managers, spam your company! How a simple email can make your life better and deepen your company’s understanding of the product roadmap

Fri, 09/14/2018 - 08:37

Last week I sent an email to the whole company. Here were some of the replies:

  • I love these emails!
  • Thanks for this. Really helpful.
  • This email was much appreciated – concise and extremely informative. Thanks for taking the time to write this up!

I don’t think I have ever replied to an email with this much enthusiasm, so it’s very humbling that anyone would reply to me like this – especially when it’s an email to the whole company.

But what did I do to deserve that response?

About once a month, I send an email to the entire company with a subject line of “A Peek into the Mind of Product”. It’s an informal newsletter to everyone in the company that does just what it says…it gives a peek into my mind and what I’m thinking with regards to Tasktop and the future of our product.

I know that may sound a little self-important – after all, I’m one Product Manager within a Product team of 10, but I urge other product managers out there to follow suit.

This email lists some of the features that are on top of the backlog and why they’re there. I give some context about what the feature is, why it’s great and why we want it. I also list out some of the features that we’re not going to deliver in the near future. Giving Sales and the Business an early heads-up helps them get their story ready for customers who want the feature. Telling Sales “no” early on can actually be a blessing for them.

Finally, it’s a great forum to preempt conversations I know will be coming my way.

These past few paragraphs are all about why it’s great for me to send this email out. But why does anyone else in the company (much less the whole company) care?

And one thing to keep in mind – this is all on top of weekly intradepartmental calls, individual calls, emails, Slack messages, etc. This is just one more way for me to communicate to the various stakeholders in the company. It’s not a substitution for any of those other channels.

Why do I do it?

The role of Product Manager is all about communication. Feature prioritization, feature design and delivery only matter if the communication is sufficient.

These ‘peek’ emails give everyone the proper context about decisions that they may otherwise feel are made in the dark. And not only do people get the context, but they get it early, not after the fact. This is critical.

These also give people a fantastic opportunity to provide their feedback. On more than one occasion, someone has reached out to me after I sent one of these emails and said “hey, there’s new info on X”. I have no doubt that info would have come to light eventually, but the email brought it to their attention.

It’s like Cunningham’s Law: “the best way to get the right answer on the internet is not to ask a question; it’s to post the wrong answer.”

Also, there’s always a relevant XKCD comic

https://xkcd.com/386/

These emails also help people in other departments see that there’s a real person behind these decisions. Few things are worse than getting bad news passed down from on high with no context and no sense that your voice has been heard. Knowing that I’m putting some thought into these decisions helps soften the message. People have the chance to see that there’s more to the story than their specific request. This gives them the ammo they need to have difficult conversations with their customers.

How did it start?

I don’t recommend just starting off sending a three-page email to your whole company without any warning. It can come off as presumptuous and spammy. This initiative first started with a simple email to the rest of the Product team. It was a bucket of features I was thinking of prioritizing for an upcoming release. People loved it.

I then reached out to some individuals on our PreSales & Support teams. They loved it. Previously, they had asked to see a ranked backlog, but the context setting gave them so much more information. Pretty soon, other departments heard about these emails and wanted in. It was to the point that it was harder to know who to keep out of the emails than who to include.

This has even expanded to the other Product Manager at Tasktop. She and I work closely together, but have separate areas of concern.

Why you should email your whole company

First, in the world of Product, it’s important to over-communicate. No one hears you the first time, and yet it’s so important that everyone understands not only what is coming, but why it’s coming. You need multiple avenues of communication. This is simply one more channel to get your message out.

Second, it will make your life easier. Instead of multiple one-off conversations about “why isn’t my feature being prioritized?”, you can state the business case for what the product is going to look like, make your case in a more holistic fashion and ensure everyone can see it.

Third, it’s a great opportunity to make sure that you’re thinking of everything. It’s so much easier for people to react to a plan than to make one themselves. This gives everyone else the chance to react in a way they couldn’t otherwise.

So I urge all the Product Managers out there…send more emails. Let your company know what you’re thinking. You may be surprised at the reaction you get.


How CIOs can make software delivery their strongest asset

Wed, 09/12/2018 - 13:04

A lot rests on the shoulders of CIOs and other IT leaders at traditional businesses. They face an almost daily battle against digital disruption. With startups and other digital-native companies gobbling up market share through software-driven products and services, one inalienable truth has become clear: software is responsible for a greater and greater share of the business value that companies provide.

Yet for many years, software delivery has been a thorn in the side of most CIOs. As Tasktop’s Naomi Lurie writes in her byline for Hacker Noon, “the issues plaguing software delivery may feel insurmountable. Despite a lot of investment in accelerating and scaling software delivery, internal customers still complain IT is not delivering value fast enough, compared to the nimble newcomers nipping at their heels.”

Software delivery is sadly viewed as a liability, not an asset. But what if there was a way to turn software delivery into a strength?

CIOs can be excused for responding to that promise with an eye roll and sarcastic thumbs-up. After all, that’s probably what you thought would happen when you invested heavily in Agile, DevOps, tooling and specialist personnel. I know I did. As you will see in my story below, I was at the end of my rope:

But fear not, it’s not all doom and gloom – as Naomi points out in her article, there are three key ways to address the issues derailing your software delivery:

  1. Product-oriented thinking
  2. A connected value stream network
  3. A new managerial framework devised for software delivery

Read on to learn how you can transform the way your organization perceives and delivers software to improve the time to value of your products and galvanize your business.

Begin your Value Stream journey

Become a VSM ambassador at your organization by completing our complimentary Value Stream Management training to help you start visualizing, measuring and optimizing the value streams that exist within your business.

Further reading


We’ve got you covered – Tasktop instantly supports Atlassian’s new product Jira Ops

Fri, 09/07/2018 - 08:00

Just three days ago, Atlassian announced the launch of Jira Ops, “a unified incident command center” that gives teams a single place for response coordination. Jira Ops, which aims to help teams resolve outages faster and incur fewer incidents over time, is available through Atlassian’s early access program and will be generally available in early 2019.

Here at Tasktop we wasted no time in checking out how we could support Jira Ops as part of a fully integrated best-of-breed toolchain. In practice, most software delivery organizations use more than one tool to accommodate the specific needs of different teams, so they may well need a solution that can seamlessly flow incidents from Jira Ops to developers working in another tool like VSTS, keeping everyone in sync and eliminating the waste and overhead that come from double entry.

And, what do you know? It works like magic. Thanks to our robust integrations, we instantly supported the new Jira Ops functionality.

We created a three-minute tech preview video demonstrating how Tasktop facilitates incident escalation across tools. In this video, we show how a new incident raised in Jira Ops automatically flows to the Dev team in VSTS as a defect. Changes or updates made to the defect in VSTS will flow back to the incident in Jira Ops, keeping everyone collaborating on the resolution in sync.
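Under the hood, this kind of sync boils down to mapping fields between the two tools’ artifacts and flowing changes in both directions. The sketch below illustrates only the pattern – the field names, payload shapes and helper functions are assumptions for illustration, not the actual Jira Ops, VSTS or Tasktop Integration Hub APIs:

# Hypothetical field mapping between an incident and a defect.
INCIDENT_TO_DEFECT = {       # source field -> target field
    "summary": "title",
    "description": "description",
    "assignee": "assigned_to",
    "reporter": "created_by",
}

def incident_to_defect(incident: dict) -> dict:
    """Translate an incident-style payload into a defect-style payload."""
    defect = {target: incident.get(source) for source, target in INCIDENT_TO_DEFECT.items()}
    defect["source_incident_id"] = incident["id"]  # keep the link for round-tripping
    return defect

def defect_update_to_incident(defect: dict) -> dict:
    """Translate defect changes back into incident fields (the reverse mapping)."""
    reverse = {target: source for source, target in INCIDENT_TO_DEFECT.items()}
    return {reverse[field_name]: value for field_name, value in defect.items() if field_name in reverse}

# Example round trip with a made-up incident.
incident = {"id": "OPS-42", "summary": "Checkout outage",
            "description": "500 errors on payment", "assignee": "dev-oncall",
            "reporter": "support-bot"}
defect = incident_to_defect(incident)
defect["assigned_to"] = "jane.dev"        # an update made in the dev tool
print(defect_update_to_incident(defect))  # the change flows back to the incident

Multiply that mapping across every pair of tools and work item types in a toolchain and the value of having an integration hub handle it becomes clear.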

A cool feature of Jira Ops is that it creates a new Slack room for each incident. By virtue of Tasktop updating the Jira Ops incident with changes made in VSTS, any change to the description, Assignee or Reporter in either tool, or the addition of a discussion point will also post a notification to the Slack channel.   

Tasktop expects to announce our formal support for Jira Ops when Atlassian makes it available for sale. In the meantime, organizations that rely on Tasktop to integrate their toolchain can rest assured they can benefit from all the new features and functionality from their tool vendors.

Also, check out our latest blog and video showing how Tasktop supports the new Agile Development and SAFe features launched in ServiceNow’s London release yesterday.

Request a highly-customized demo to see how Tasktop connects all the best-of-breed tools in your value stream to optimize how you plan, build and deliver software at scale, accelerating the time to value of your digital products.


An integrated approach for end-to-end traceability in software delivery

Thu, 09/06/2018 - 11:55

The good news is that traditional enterprises are finally prioritizing software delivery to combat digital disruption. They’re growing and improving the quality of their digital product portfolio, as well as enhancing their IT systems, to gain the competitive edge in the Age of Software. The bad news is that traceability in enterprise software delivery – i.e., tracking all the work and activities that deliver all these products – is becoming harder than ever.

For many of these organizations, especially those in heavily-regulated sectors such as finance, insurance and federal government, there is one overarching question – how do you optimize and innovate your software delivery while remaining compliant? Especially when all the work that takes place is invisible and travels through a complex network of people, tools and processes?

For safety-critical industries, the idea of introducing new tools, people and processes to accelerate delivery and time to value (TtV) can seem like a wave of red flags – a threat to their data integrity and security. Yet while change is never easy, when it comes to the frenetic digital world, it’s a necessity.

That’s why a new modern approach to traceability must be adopted – especially when product lifecycle profitability can be affected if software isn’t properly maintained, or if current or future regulation changes can’t be easily addressed.

Download the e-book


How Tasktop and ServiceNow can help IT organizations to accelerate their time to value [VIDEO]

Wed, 09/05/2018 - 07:13

At DOES London 2018 in June, we were delighted to see that IT organizations around the world are increasingly focusing on how work flows from ideation to production across their software delivery value streams to build better products faster.

It is now abundantly clear that enterprises must think “end-to-end” and connect all the specialist teams and tools that help plan, build and deliver software at scale, and automate the flow of work between work stations, if they’re to accelerate the time to value (TtV) of their digital products.

As you can see from our Value Stream Management offering, we’re playing a critical role in linking all specialist teams at key stages of the enterprise software delivery process – from the initial customer request/ideation to creation, release, operation and back through the customer feedback loop. Crucially, we’re helping the world’s leading organizations to think beyond Agile and DevOps to support large-scale transformation.

And our best-in-breed offering continues to get stronger as our partners – who offer leading tools at all these key stages of the software delivery process – evolve their products and solutions too. For instance, ServiceNow took a big step this year to enhance their offering through the release of ServiceNow London.

Recognizing the need for a “tactical and strategic balance” between development and operations to swiftly and continuously deliver great software in a fast paced world, London adds support for customers’ SAFe 4.5 initiatives, as well as continuing to support Agile development.

Tasktop continues innovating at the pace of our partners such as ServiceNow. The following video shows how our joint customers can immediately adopt the new capabilities in ServiceNow London by connecting ServiceNow to the tools Agile teams are already using, such as JIRA, while teams scaling Agile initiatives with VersionOne can flow Epics, Features, Stories and other work bidirectionally with the new SAFe 4.5 application in ServiceNow.

The use cases shown in the above video are just the tip of the iceberg.  Only Tasktop can connect ServiceNow London to 50+ other tools used to build software in your organization.

Request a highly-customized demo to see how model-based integration can provide a reliable, scalable and easy-to-use infrastructure to help you grow and optimize your enterprise software delivery.


The 5 Best Metrics You’ve Never Met

Wed, 08/29/2018 - 11:48

Technology companies working to stay relevant in their market all seem to be waist-deep in some kind of transformation. Agile transformations, Digital transformations and DevOps transformations are ubiquitous as companies attempt to change the way they work in hopes of improving business outcomes.

Measurement and metrics are a key part of any transformation. When it comes to assessing a transformation – to see if the needle is moving in the right direction – performance metrics have come under intense scrutiny. Traditional performance metrics, such as counting the number of lines of code or the number of software bugs, should be used with caution, because there are bugs that are not worth fixing and code that is not worth maintaining. These old-school performance metrics represent activities, not outcomes. Activity metrics tell organizations very little about the true impact on business goals.

Activity metrics focus on busyness, but busyness does not equate to business value delivered. People can be remarkably busy all day long dashing from meeting to meeting with no increase in business revenue or reduction in costs. Traditional metrics may be free and easy to measure in existing tools, but are they beneficial? Think about the behaviors that your metrics incentivize across your organization. What we measure impacts people, because people value what is measured. We need to find better ways to measure outcomes.

So – what to measure? Consider Flow metrics. Flow metrics are performance metrics that reveal trends on desirable business outcomes — such as faster time-to-market, responsiveness to customers and predictable release timeframes. These business outcomes play an essential role in successful transformation efforts as the bar to remain relevant in the future keeps rising. Flow metrics correlate with the generation and/or protection of business value. Allow me to introduce you to five powerful Flow metrics.

(At the summit of the DevOps journey is the ability for teams to deliver more frequently and be more responsive to their business. To do this, they must accelerate their delivery capability, which requires increasing flow. My prior boss would frequently kid me, asking why we want to see things flow. The answer was to see where they stop or slow down.)

Flow Time

Flow time is a measure of how long something takes to complete from beginning to end. You might be thinking, “Wait, that’s cycle time.” And you might be right. It depends on the context as to which definition you use. Depending on whom you ask, “cycle time” has different meanings and the clock may start or stop in different places. Just know that cycle time is an ambiguous term, and that’s why I prefer to use Flow time when discussing speed metrics. Because Flow time is an unfamiliar term to most, it provides an opportunity to clearly define its meaning. Flow is value pulled through a system smoothly and predictably, and it is the first of the three foundational principles underpinning DevOps.

To determine where to start the clock for your context, consider the Flow time illustration below. The clock starts ticking when the request is approved and ends when the change is up and running in production.

In comparison, the clock for Lead time starts with the initial customer request. But, similar to bugs that are not worth fixing, some customer requests are not worth doing. Take Apple, for example. With popular products, the number of changes requested is so high that it is not possible to triage them all. Popular open source projects have similar problems.

Flow time has a start time and an end time. That’s all. Flow time doesn’t stop the clock just because the weekend rolls around. What flow time does do is help quantify the probability of completing x percent of work in so many days.

Collecting historical Flow times that show, for example, that 90% of a certain type of work gets delivered within 30 days allows us to say that 9 out of 10 times, we deliver these kinds of requests within 30 days. We know then that there is a 10% probability that some work will take longer. This is important because it helps us become more predictable with our customers.
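As a minimal sketch of how that kind of statement can be derived from historical data (the flow times below are invented for illustration):

# Flow times in calendar days, one value per completed item (invented numbers).
flow_times = [3, 5, 7, 8, 10, 12, 14, 18, 21, 25, 28, 29, 30, 34, 45]

def percentile(values, pct):
    """Nearest-rank percentile: the value at or below which pct% of items finished."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

within_30 = sum(1 for t in flow_times if t <= 30) / len(flow_times)
print(f"{within_30:.0%} of items finished within 30 days")
print(f"90th percentile flow time: {percentile(flow_times, 90)} days")

With a history like this you can tell a customer “nine times out of ten we deliver this kind of request within this many days” instead of quoting a single optimistic estimate.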

Flow Efficiency

Good metrics help others to see a clearer picture and help set more accurate expectations when it comes to questions like, “When will it be done?” Due dates rarely take wait time into consideration. The problem is usually not in the work time—it’s in the wait time.

Think about delays from dependencies on other people – when it comes to how long things take, wait time matters more than the actual size of the work. You are better off estimating the wait time than the work time. Wait time often consumes 85% or more of Flow time.
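Flow efficiency is usually expressed as active work time divided by total flow time (work plus wait). A tiny sketch with invented numbers:

def flow_efficiency(active_days: float, wait_days: float) -> float:
    """Flow efficiency = active time / (active time + wait time), as a percentage."""
    total = active_days + wait_days
    return 100 * active_days / total if total else 0.0

# Invented example: 3 days of hands-on work, 17 days spent waiting on other teams.
print(f"{flow_efficiency(3, 17):.0f}%")  # 15% -- the delay is in the wait, not the work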

The WIP Report

A training team that concentrates together on producing training materials progresses faster on the training collateral during weeks when they are not also traveling to customer sites and speaking at conferences. A marketing team progresses faster when they work on 7 initiatives at one time instead of 13. College students finish their homework sooner when they take two classes instead of three classes. One can argue that it depends on the complexity of the work. The homework for three freshman-level classes may take less time to complete than the homework for two graduate-level classes. And this is why it’s important to break up work into smaller bits that can be completed and delivered quickly. The quicker the delivery, the faster the feedback.

Too much Work-In-Progress (WIP), referred to in the Flow Framework as “Flow load”, opens the door to more dependencies, more conflicting priorities and more unplanned work creeping in, all of which cause delays. Capturing WIP trends and comparing them to Flow time results can help you see the relationship between WIP and speed in your organization.
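One simple way to check that relationship is to line up weekly WIP averages against the flow times of items finished in the same weeks and see whether they move together (the numbers below are invented):

from statistics import mean

# Invented weekly samples: (average WIP during the week, average flow time in days
# for items finished that week).
weekly = [(5, 9), (6, 11), (9, 16), (12, 24), (13, 27), (7, 12)]
wip_values = [w for w, _ in weekly]
flow_values = [f for _, f in weekly]

def pearson(xs, ys):
    """Plain Pearson correlation, enough to show whether WIP and flow time move together."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"WIP vs flow time correlation: {pearson(wip_values, flow_values):.2f}")  # close to +1 here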

The Aging Report

Aging reports reveal how long work has been sitting in the pipeline not getting done. Looking at all the work that’s been in the system for more than 30 days (or 60, or 120 days) shines a valuable light on how much waste is in the system. This example compares the average duration of work items and highlights the ones that are taking longer than the average.
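A minimal aging-report sketch along those lines, with invented items and dates:

from datetime import date

today = date(2018, 8, 29)  # invented "as of" date for the report

# Invented in-progress items and the date work started on them.
in_progress = {
    "Fix login defect": date(2018, 8, 20),
    "Security vulnerability patch": date(2018, 6, 1),
    "New reporting feature": date(2018, 8, 1),
    "Refactor billing module": date(2018, 5, 15),
}

ages = {name: (today - started).days for name, started in in_progress.items()}
average_age = sum(ages.values()) / len(ages)

print(f"Average age: {average_age:.0f} days")
for name, age in sorted(ages.items(), key=lambda kv: -kv[1]):
    flag = "  <-- older than average" if age > average_age else ""
    print(f"{age:4d} days  {name}{flag}")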

Flow Distribution

Categorizing work into different work types supports changing work priorities and filtering report data. Flow Distribution shows the targeted (and the historical) proportion of work item types, bringing visibility to planned work allocation. When work is categorized, you can filter reports such as the WIP report by work type, which in turn can help you improve your WIP allocations and your predictability.

Depending on context, allocations may need to change. If you’ve just released a new feature, tackling defects or debt may take priority over introducing more features. If you continue to do more feature work, it will steal capacity away from other work, like fixing problems related to tech debt. Categorizing and measuring work type distribution helps you prioritize accordingly.
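A sketch of comparing a targeted allocation against what was actually delivered, using the Flow Framework’s feature/defect/risk/debt split (the counts and targets are invented):

from collections import Counter

# Invented list of completed work items tagged by type.
completed = ["feature"] * 14 + ["defect"] * 3 + ["risk"] * 1 + ["debt"] * 2
target = {"feature": 0.50, "defect": 0.20, "risk": 0.10, "debt": 0.20}  # planned allocation

counts = Counter(completed)
total = sum(counts.values())

print(f"{'type':<8}{'target':>8}{'actual':>8}")
for work_type, planned in target.items():
    actual = counts.get(work_type, 0) / total
    print(f"{work_type:<8}{planned:>8.0%}{actual:>8.0%}")

A big gap between target and actual – for example, features crowding out debt – is the signal to rebalance priorities.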

Mapping Metrics to Outcomes

Keeping pace with the future requires change. When it comes to transforming the way teams work, consider mapping metrics to desirable outcomes to improve decisions during transformations – or during other times filled with uncertainty.

If time-to-market is one of your desirable outcomes (because people complain about how long things take), measure Flow time to help others see just how long things actually take.

If efficiency is one of your desirable outcomes (because people are blocked waiting on specialists or events), measure Flow efficiency to see where bottlenecks exist, so you can focus on areas that will improve flow. When it comes to flow, it doesn’t do much good to optimize an area that is not the bottleneck.

If teams are dealing with unplanned work and/or conflicting priorities, measure the amount of WIP to expose overallocated teams. When it comes to efficiency, time is wasted when there is too much focus on resource efficiency over flow efficiency.

If important unfinished work (such as fixing security vulnerabilities) is neglected, measure the age of partially completed work to expose risks. Like a bridge under construction, zero value is realized until it’s finished.

If certain types of important work (such as fixing technical debt) are not prioritized accordingly, measure work type distribution to bring visibility to problems related to allocations.

For a deeper dive into flow metrics, check out Making Work Visible: Exposing Time Theft to Optimize Work & Flow by Dominica DeGrandis and Project to Product: How to Survive and Thrive in the Age of Digital Disruption with the Flow Framework by Mik Kersten.

Final Tips

Beware of falling into the trap of optimizing for a single metric. A hyper focus on speed might not make you more predictable. It’s okay to ease off in one area to benefit the whole system.

Flow metrics look at trends over time – instead of viewing single data points in isolation, ask: are we moving in the right direction?

If letting go of existing unhelpful metrics is too big of a change for your organization and meets with resistance, consider capturing Flow metrics in addition to your current metrics and compare the outcomes. Experiments are a great way to test new ways of working.

Quantitative measures are usually more accurate than personal perceptions and experiences. When you are knee deep in transforming, Flow metrics can help you make better decisions.


A week in the life of Tasktop’s Product Team – from a Girl Scout

Fri, 08/24/2018 - 12:31

This year, each girl in my Girl Scout troop had the chance to pick a career site for the troop to visit that reflected her career interests. We visited Tasktop because one of my troop-mates was interested in a career in computer science.

I had chosen to meet with a Japanese teacher since I adore linguistics; however, sitting in a software development company was more foreign to me than any language I had ever encountered! That said, the more I learnt about the Product team at Tasktop, the more I became interested in spending a week at Tasktop as an intern.

Getting familiar with the concept of the Product Team certainly did not disappoint. Even at the first meeting, stereotypes of engineers sitting at the computer with Java open everywhere in the office were dispelled. I had no idea about the amount of creativity, teamwork, communication, and drive involved in software development until I was actually in the office.

As for the actual software part, that’s where things get muddled up in my brain. Somehow I’ve managed to grasp the concept of Hub, the main product that Tasktop actually makes. It’s essentially a system to hold data that can be easily translated to any of the repositories with the connectors (to me, at least), and it logically challenges you to think of new ideas and ways to improve this system without having to actually know much about coding.

Turns out, there’s more to a software company than just code, which was news to me. Of course, it’s a company, but I never exactly processed the fact that it’s still a company that also has to sell and market a product (although I think I would much prefer working in Product!). The amount of integration between different skills in Product is appealing to someone like me who admires design, communication between different languages, and puzzles for the brain to figure out.

I’ve certainly enjoyed my time being an intern at Tasktop. The subject matter was surprisingly incredibly engaging and exciting, and the people I got to interact with were amazing to be with.

Thank you for having me, Tasktop!


Managing Time Off with a Deep Team Roster

Tue, 08/21/2018 - 11:00

It’s inevitable that there will be times when key members of your team will take some time off. As much as we would love for our key team members to always be available, we all need time to step away from work to prevent burnout, refresh our minds, or take care of personal situations.

One drawback, however, is that the rest of the team can be deprived of a large chunk of knowledge and manpower during this period. In many cases, having a bigger team can at least help reduce the loss of labor and leave you with an almost “full” roster. While numbers are good, it is vital that teams develop a “deep” team roster to effectively handle times when members of the team are unavailable.

My experience

To give an example of what I mean by a “deep” team roster, my team was recently put in a position where one of our Product Managers was out on parental leave. Being a Product Analyst on the same part of our product, I was lucky enough to be given the opportunity to step in and essentially act as interim Product Manager during this time.

I had not had much experience being a Product Manager, so this was a pretty daunting task to take on. I had attended many calls with our Product Managers, so I had at least gained a secondhand experience with much of the work and the meetings that they took part in. Seeing and actually doing the work are two very different things though, so I had no idea how this would turn out.

What I did not realize is that I had gained much of the vital Product Management knowledge in only a year since starting at Tasktop. When meeting and working with partners, customers, and my own colleagues at Tasktop, I was capable of answering most questions and making decisions regarding our product.

My previous work experience, and secondhand experience with the Product Managers, gave me a good foundation to confidently do what was asked of me. And for situations where I wasn’t fully confident on a decision or answer, other team members could fill in the blanks needed to make the call in these situations. I was not alone during this time due to my whole team having the full knowledge necessary to fill the void of our unavailable Product Manager.

This was my inspiration to write this piece because I realized that my team not only had the numbers and manpower to make the team feel “full” while our key team member was out, but we also had a “deep” pool of knowledge where there were no major gaps among the whole team despite being down a key team member. Our dedication to empowering and growing every team member made this possible.

How to build a deep team roster

My recommendation is that you should make an effort to disseminate knowledge across the whole team to enable team flexibility and balance. You never know when situations will arise where your key team members need to take some time off, and you should put your team in a position where a time like this doesn’t hinder productivity or chances of being successful. I’ll leave you with some examples of what you can do to help build a “deep” team roster:

  • Cross-team collaboration in meetings and on projects
  • Knowledge sharing sessions with the whole team
  • Establish contingency plans that you can practice (more details in my colleague Trevor Bruner’s blog post, Why we need to talk about contingency plans in software development)
  • Spread responsibilities among the whole team
  • Communicate with the whole team when answers aren’t readily available – you may be surprised by who can help provide an answer

Time off should be something to rejoice in, as it helps prevent burnout and ensures your team can stay highly productive over time. Taking the time to build a “deep” team roster will pay off by allowing the rest of the team to handle these situations confidently. And the team members who take time off can truly get the most out of that time, knowing that they have left their work in good hands.


The painful experience of a Product Analyst with no software toolchain integration

Thu, 08/16/2018 - 08:00

When I joined Tasktop about three months ago, I quickly realized that Tasktop targets the exact same problems that I faced during my previous job as a Product Analyst.

In that role I was using various tools to ensure inbound open source and third party licensing compliance. To complete my daily tasks, I used a combination of internal and external software tools to coordinate license and agreement reviews.

In order to update the status of reviews and align data, I was required to manually copy and paste information across these tools. Because this was an error-prone process, I would double-check every time I entered data to make sure it was correct. This made my job a lot more difficult for two clear reasons:

1. Lack of Automation and Integration

I often wished that these recurring processes could be automated. I eventually realized that it was difficult and time consuming to plan, validate, and execute on automating an entire process, even if it was a smaller process within the company. Furthermore, aligning the understanding of the proposed process and obtaining approval from stakeholders would also be needed. This meant setting up meetings with colleagues from different teams and the involvement of various levels of management. Because it was often challenging to find a meeting time that worked with everybody, there were delays and long periods of time between each meeting. Things moved slowly and time was spent on recalling previously discussed topics. All of this essentially meant that it was easier to “suck it up” and continue with the manual and inefficient processes instead.

2. Lack of Visibility into Statistics and Status

For management meetings, I was asked to present the volume of reviews that we were receiving and processing. Because of a lack of integration between data sources, this required digging and consolidating data into a spreadsheet. A big chunk of time was also spent on cleaning up data to take care of duplicates and inaccuracies. Although the tools I used stored information relating to the same reviews and components, they weren’t integrated. This meant that we had siloed processes where the status of a review could not be easily determined when it had moved to another team or tool. This required a lot of back and forth communication via email, which created delays in order to follow up and proceed with reviews.

Conclusion

Although I may not have directly benefited from the use of Tasktop at my previous company, the problems I faced with a lack of automation and visibility are problems that I believe many people deal with on a daily basis.

Tasktop identifies these as prevalent problems at leading organizations around the world and provides a solution that increases efficiency, reduces overhead and accelerates value delivery. As I continue to learn more about Tasktop, I get more excited about my role at the company as a Product Analyst. I am looking forward to my work here and seeing how far Tasktop goes in helping organizations become more effective at delivering software at scale and extracting more business value from IT.

Are you a Product Analyst or someone else whose job is made unnecessarily harder through time-intensive manual work? Chat with us today to find out how we can improve the productivity, quality and value of your day-to-day work and your company’s digital products and services.


“An amazing experience” – how Tasktop is creating an impactful and fulfilling co-op program for budding software developers and engineers

Tue, 08/14/2018 - 10:29

I have seen first hand how co-op programs can play a crucial role in helping students to carve out a successful career in software development and engineering. The chance to get out of the bedroom and classroom and finally be paid to apply your skills in the real world is an invaluable experience. It can also be incredibly exciting and life-changing too.

But it can also be scary; for many co-ops the unfamiliar territory opens the door to a swarm of what ifs. What if you don’t get the support and opportunities necessary to grow? What if you feel overwhelmed and under-appreciated? What if it’s eight months of wasted time? Picking the right company for a co-op program is a big decision and shouldn’t be made lightly.

About Tasktop’s co-op program

As an Engineering Manager, former co-op (at two other companies) and current leader of Tasktop’s co-op program, I have long grappled with these what ifs. However, with the support of the company, I’ve looked to directly address them through a meticulously thought-out co-op program that focuses on comfort, happiness, fulfilment and, of course, personal and professional growth.

Extensive on-boarding

With up to six students starting on the same day, we designed a 10-day on-boarding program to make them feel welcome, ensure their machines are set up properly, introduce them to people from various departments, and teach them the overall goals, processes and terminology used by the company. The students are then assigned to various teams across the engineering department.

One-on-one mentorship

Knowing that co-op students learn best when they have someone to guide them, we dedicate a mentor to each student. We set clear guidelines and expectations for the mentor to make sure that they can help their co-op students succeed in their roles. Someone has always got your back if you need it.

Part of the team from the get-go

Everyone who joins the company is instantly a Tasktopian. Co-op students are treated like our regular employees: they work on the same code base and go through the same process as the rest of the team every day. They participate in code reviews, as well as getting involved in team decision-making. The entire team is there to support the co-op students, and everyone is encouraged to help one another.

Regular feedback from management

Our managers do regular one-on-ones with their co-ops. We carry out monthly, mid-term, and final check-ins with the co-ops to make sure that they are comfortable working at Tasktop and to address any concerns they might have while working with us.

Q&As with VP of Engineering

We also host co-op Q&A sessions where students have the chance to ask our hugely experienced and knowledgeable VP of Engineering, Dave Wong, any questions regarding their studies and careers.

A work environment that is creative, fun and collaborative with code jams, forums and happy hours

Ryan Nosworthy, Senior Software Engineer and Shannon Benson, co-op student at Christmas Party 2017

There are also company events such as Tasktop Jam, Tasktop Forum and Happy Hour (free beer!), which are always hugely popular with our co-ops. We always aim to create a healthy, fun, collaborative environment where the co-op students can maximize their learning experience as well as be supported as part of the team.  While it’s a sad moment when they leave us to go back to school, there’s so much joy when they come back to visit us during Happy Hour or even better – come back as a full time employee!

The foosball table is always popular during Happy Hour and lunchtimes

What our co-ops have to say…

I’m delighted to announce our approach is yielding positive results and helping us to continuously refine it. Typically, Tasktop takes on 10-15 co-ops every eight months, representing approximately 10 percent of our engineering team. We have also seen nine former co-ops return to the company in a full-time capacity. We spoke to some co-ops past and present about their individual experiences.

Click play below to listen to an interview with a former co-op student, Victoria Chang:

Mandy Fung, former co-op and now Software Engineer 1, Tasktop

“Tasktop provides vast opportunities and has a great culture with a positive environment. As a former co-op, I was able to get hands-on experience from every aspect in software development while receiving the support I needed. The fundamentals that I’ve learned and the new things that I’m continuing to learn from Tasktop has definitely helped me in professional development.” 

Jaxsun McCarthy Huggan, former co-op and now Software Engineer 2, Tasktop

“I started as a co-op at Tasktop in 2011, and the company has changed a lot in the intervening years, but the thing that kept me coming back for more through my degree and as a full time employee has remained largely intact. That one thing is the company’s openness to having everyone contribute at any level. Whether it be working shoulder to shoulder on a team with multiple PhDs and Masters degrees as a 19 year old co-op, or being given the reigns to our internal lecture series “Tasktop Forum” shortly after joining the company full time, Tasktop has always been open and proactive with giving real responsibility and opportunity to everyone. That’s what let me know that Tasktop was the right place for me as a student and as a full time engineer.” 

Louis Belleville, former co-op and now Software Engineer 1, Tasktop

“Out of all the different places I have co-oped at, Tasktop stood out to me for a few big reasons. The training given did an excellent job of getting me familiar with the codebase, and throughout my entire co-op term whenever I had a question the answer was quickly forthcoming. The processes ran mostly smoothly allowing me to focus on programming and things I wanted to focus on, rather than meeting, reports, or other miscellaneous overheads. And most of all I found that I liked to work with everyone here, and that the company atmosphere and Friday Happy Hours were great.” 

Tim van der Kooi, former co-op and now Software Engineer 1, Tasktop

“As a co-op at Tasktop, I was given a comprehensive hands-on experience that I believe I would not have received at a larger company. I felt that I was given a large degree of freedom to work on projects that made a difference for our developer teams and was given an opportunity to get a breadth of experience in multiple tools and technologies. Additionally, the mentorship I received from some of the other developers on my team was second to none, as they were extremely helpful and patient with all of my questions along the way.

I returned to Tasktop because I knew I would be returning to engaging work in a growing company with a positive work culture. Many of my peers come from unique professional backgrounds outside of software development and I believe that contributes to a refreshing array of perspectives and personalities in the office. With happy hour at 4pm every Friday, there is a strong emphasis on social atmosphere and work-life balance that I enjoy.“

Tony Kong, UBC Science Co-op Student

“Working as an co-op at Tasktop on the integrations team was an amazing experience to say the least. Tasktop has a great open work culture and I was never treated as an co-op but as another fellow team member. I was given the freedom to experiment and make mistakes while always having someone I could depend on to guide me in the right direction. Everyone in the company was happy to contribute and help me grow as a software developer and as a person. I would highly recommend any future co-ops to come experience Tasktop.”

Griffin Tench, BCIT Computing Science Co-op Student

“Working at Tasktop has been a great learning experience so far and I am very happy to be working here. At Tasktop, you are surrounded by people who are both knowledgeable and helpful, and the company provides a working environment that makes it easy to learn and be productive. I highly recommend Tasktop to prospective co-op students learning software development.”

Shannon Benson, UBC BCS Co-Op Student

“My experience as a co-op at Tasktop was great. I learned many valuable new skills and greatly built on the object-oriented programming knowledge I had acquired through school. Along with that, the weekly Happy Hour and other social events provided great opportunities to meet other co-ops and people in the company. Overall, it was a very worthwhile experience and I would recommend it to any student as a co-op placement.”

Ryan Koon, UBC Science Co-op Student

“The endless opportunities to challenge myself at Tasktop helped to develop both my technical and soft skills. Coupled with Tasktopians’ eagerness to share their knowledge, whether it is related to software development or gravitational waves, Tasktop is an empowering environment that promotes wellness, growth, and innovation.”

Kiko Blake, SFU Computing Science Co-op Student

“At Tasktop, I’ve gotten to work in an environment where I’ve felt like another one of the team, where my opinion is heard and respected, and where I am able to learn with the support of my team members and mentors. I’ve been able to gain experience in a variety of areas, as I am included in all parts of the engineering process from daily scrums and weekly retros to new feature development and verification.”

Speak to us today!

For more information about our co-op program, please contact your school’s co-op coordinator or email us at recruiting@tasktop.com.

Want to know more?

The co-op experience at Tasktop

How to optimize the software development co-op/internship program 

Recipe for an amazing internship

What it’s like working at Tasktop 


New Forrester New Wave™ report cites Tasktop as a strong performer in Value Stream Management

Thu, 08/09/2018 - 13:34

Following on from April 2018’s Forrester report that concluded the “time is now” for Value Stream Management, a new report – The Forrester New Wave: Value Stream Management Tools, Q3 2018 by Christopher Condo and Bill Seguin – looks into the capabilities and strategies of 13 of the most significant vendors in the Value Stream Management (VSM) market, including Tasktop and 7 of our partners.

“VSM is an emerging tool category that connects an organization’s business to its software delivery capability. VSM tools provide multiple roles — product managers, developers, QA, and release managers — a view into planning, health indicators, and analytics, helping them collaborate more effectively to reduce waste and focus on work that delivers value to the customer and the business.” – The Forrester New Wave: Value Stream Management Tools, Forrester Research, Inc., August 6, 2018

We believe that the report acknowledges that all vendors share elements of Forrester’s vision for VSM:

“To unify end-to-end visibility of software development; unify the capture of data, events, and artifacts within the process; define and visualize key performance indicators (KPIs) that are meaningful to the business; govern the processes with reusable templates; and provide an inclusive customer experience (CX) that allows multiple roles to collaborate and deliver more value than they would working on siloed teams.” – The Forrester New Wave: Value Stream Management Tools, Forrester Research, Inc., August 6, 2018

With software delivery becoming a major priority for organizations, tool vendors are responding to market demand from customers for better transparency into their software delivery process. As a company that has long expounded the need for a complete end-to-end view of how work and value flow across the software delivery value stream, we find Forrester’s assessment music to our ears.

The methodology behind The Forrester New Wave

The Forrester New Wave differs from the traditional Forrester Wave in that it only evaluates emerging technologies, basing its analysis on a 10-criteria survey – analytics, common data model, governance, integration, mapping, value measurement, visualization, vision, road map and market approach – and a 2-hour briefing with each vendor.

The analysis groups the 10 criteria into current offering and strategy, as well as market presence. Each vendor has:

  • An annual VSM product revenue over $5 million
  • A cohesive VSM solution
  • Forrester inquiry experience and feedback from vendor client base

Tasktop – a strong VSM performer and visionary

In terms of the evaluation criteria, Tasktop’s differentiators were integration, vision, road map, market approach and common data model. In addition, Forrester noted that:

Tasktop is a best fit for companies that want a best-of-breed DevOps stack. Tasktop aims to integrate and coordinate value streams across your DevOps stack from ideation to production, but it will leave domain-specific work to other specialized tools.

What Tasktop’s customers have to say

“Tasktop’s customers praised its ability to give them a lens into their software development processes and provide transparency into their process flow.” – The Forrester New Wave: Value Stream Management Tools, Forrester Research, Inc., August 6, 2018

“If I only had a point-to-point interface it would be a mess. Tasktop can standardize inputs and offer functional flexibility.” – Customer reference quote in The Forrester New Wave: Value Stream Management Tools, Forrester Research, Inc., August 6, 2018

“Tasktop’s visual format lets you see flowing artifacts.” – Customer reference quote in The Forrester New Wave: Value Stream Management Tools, Forrester Research, Inc., August 6, 2018

Download the report

Tasktop is making that report available to interested parties at no cost – you can download it by clicking on the graphic below:

Click the image to download

 

Further reading


Begin your Value Stream journey

Become a VSM ambassador at your organization by completing our complimentary Value Stream Management training to help you start visualizing, measuring and optimizing the value streams that exist within your business.


How to foster software developer productivity

Tue, 08/07/2018 - 08:45

Last month, we were very fortunate to have André Meyer come into Tasktop to give a presentation on fostering software developer productivity. For many years André has been working with a research team with one of our company co-founders, Gail Murphy, to address the ongoing supply and demand shortage in software delivery.

As “software continues to eat the world”, the need for software is outstripping our ability to supply it. Just how do we enable and empower software developers to build better software faster and make them more productive?

As Meyer points out, researchers have been trying to solve that mystery for years to little avail. To gain a deeper understanding of the problem, Meyer highlighted three challenges in understanding and increasing developer productivity:

Challenge 1: Limited knowledge about developer work days

Challenge 2: Productivity is often measured by output measures only

Challenge 3: Most developers are not aware of productivity factors

To overcome these challenges, Meyer and his fellow researchers sought answers to three core questions:

  1. What does a software developer’s work day look like in terms of activities and work fragmentation and how does it relate to perceived productivity?
  2. Can we apply self-monitoring to increase developers’ awareness about work and productivity for a) teams and b) individual software developers?
  3. Can we devise approaches that foster productive behaviours at work through the provision of actionable insights?
Watch the presentation

You can learn about Meyer’s discoveries in answering the above questions in the below video:

Further reading

This white paper is based on research in which 11 professional software developers from three international software development companies of varying sizes were each observed for four hours.

The findings reveal the key factors that make developers feel productive and provide compelling insight into how to eliminate the activities and tasks that drain developer productivity.

Click the image to download the white paper: First steps to speeding up your dev teams

Want to know more about measurement and improving the productivity of your software development and delivery teams? Speak to us today.

About André Meyer

André N. Meyer is a Ph.D student in Computer Science at the University of Zurich, Switzerland, supervised by Prof. Thomas Fritz. His research interests lie in developers’ productivity and work, and in creating tools that foster productive work by using persuasive technologies such as self-monitoring and goal-setting. He also works in the information technology industry as an application developer and consultant and has interned twice with Microsoft Research. His homepage is http://www.andre-meyer.ch.

Recently, he investigated developers’ perceptions of productivity and their context switches and fragmentation of their work. With the FlowLight, he and his colleagues successfully reduced costly interruptions at inopportune moments. With WorkAnalytics, he is increasing developers’ awareness about good habits at work, to foster productive behavior changes at the workplace.

Find out more about his work:


What enterprise software delivery can learn from a women’s hackathon and machine learning

Thu, 08/02/2018 - 08:43

What a special experience. An old friend and colleague, Lynn Pausic, one of the co-founders of Expero – a company with extensive experience in machine learning applied to complex business and technical problems – asked if I would help judge a “machine learning hackathon for women”. How could I say no to that?

Eight teams of women presented highly innovative and varied ideas for how machine learning could be applied to do good in the world, help improve and save lives, and even make home-cooking easier!

Other ideas included applying machine learning to help fire departments predict what kind of calls they would get based on what type of large scale disaster/event has occurred; to parse conversations on things like Slack to be able to identify “angry” conversations; and help understand why reviews for a company product were positive or negative.

But putting aside the inspirational aspect of being surrounded by incredibly talented women tackling highly technical, scientific and fun topics, I realized something else important.

Software delivery and the act of producing software – the very thing that all of these women were doing all weekend – is sadly behind the times with regard to the very technology that has the potential to dramatically effect change and improve what drives the world economy. Software needs more machine learning.

I would love to hear from software professionals out there – how do you think machine learning could be applied to improve how software is built and delivered, especially at large enterprises? At Tasktop, we think there is huge potential at all levels, from the practitioner side of things, right up to how a business operates.

In fact, we believe that by focusing on Value Stream Management and the flow of value through the massively complex networks of tools, artifacts and activities required to deliver software at scale, organizations have the opportunity to drastically affect and improve business outcomes.

Click the image to download

By exploring the massive and intensely interesting data that surrounds the act of building and delivering software, we will begin to see the patterns in software development that impede the speed of delivery. We can understand which team structures and sizes are most effective. We can learn how dependencies between products and requirements increase technical debt. We can learn what ratios of capacity allocated to features, defects and tech debt are appropriate at different stages of a product’s maturity. We can understand how collaborative development techniques affect employee and team happiness. The list goes on.

My Sunday afternoon was spent being inspired by creative, focused and data-driven women.  And I’m excited to take that inspiration and apply it to what I’ve been dedicated to professionally for many years – how to improve how software is delivered at scale; one machine learning application at a time.

For more on the power of machine learning in the frenetic world of software delivery, follow @lynnpausic, @GrahamGanssle and @experoinc.

Speak to us today about a complimentary one-hour consultation with one of our value stream experts to help you start visualizing, measuring and optimizing the value streams that exist in your business.

The post What enterprise software delivery can learn from a women’s hackathon and machine learning appeared first on Tasktop Blog.

Forget point-to-point: why models are the only way to scale toolchain integration

Mon, 07/30/2018 - 13:48

“Despite the well-documented benefits and criticality of toolchain integration, enterprise-level integration continues to present big challenges for many organizations.”

If there was ever any doubt that enterprises rely heavily on cross-tool integration to thrive, Salesforce’s acquisition of MuleSoft for $6.5 billion in 2018 settled it. Integration is no longer an afterthought or a nice-to-have. It is, simply put, a prerequisite for success. And using a model-based approach is the only way to scale operations.

Today’s organizations need information to flow intelligently between their internal systems and to and from partners and suppliers. Integration enables automated workflows, keeps systems in sync, streamlines the flow of work, prevents data discrepancies and eliminates costly human error.

Integration also makes organizations leaner, by cutting out the manual, non-value-adding work (aka ‘waste’) people have to do to keep colleagues and tools in sync. Waste, like entering the same data twice in two systems, slows down throughput and hinders value creation.

In a software delivery organization, integration is particularly important, because no single tool or product suite can support the specialized work of all the varied teams that collaborate to develop and support a product, feature or service.

PMOs, product owners, business analysts, architects, UX designers, developers, testers, operations, help desk, security officers – each and every one of them performs a unique job. They need to be supported by specialized tools that make their work possible.

Moreover, sometimes two people in the same company with the exact same job title actually work on entirely different products, supported by a completely different tool stack. That’s just the nature of the beast.

Thus, integration plays a critical role in tying all these specialized tools together and making them work like a single well-oiled machine. Information flows automatically from tool to tool, specialist to specialist, “pushing work along” to the next step in the workflow.

In addition, integration enables organizations to aggregate data across lines of business or departments, creating visibility for the managers and executives who need to see the big picture.  

The ongoing challenge of enterprise-level integration

Yet despite the well-documented benefits and criticality of integration, enterprise-level integration continues to present big challenges for many organizations.

The main reason is that the traditional approach to integration – i.e., point-to-point integration between two tools – cannot handle the size and complexity of the work that enterprises are undertaking.

The average enterprise and agency needs to flow rich product lifecycle data across a software delivery value stream that comprises*:

  • 100-1000s of projects
  • In 5-10 core tools
  • Housing 30-40 artifact types
  • With 30-100 fields each
  • And 100s of possible values and states

Model-based Integration is the only way to efficiently synchronize such large volumes of sophisticated data while maintaining its integrity and supporting cross-tool reporting.
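To get an intuition for why point-to-point mapping struggles at this scale while a model-based approach stays manageable, here is a rough back-of-the-envelope sketch in Python. The tool and artifact-type counts are purely illustrative assumptions, not Tasktop figures, and the helper functions are hypothetical:

```python
# Rough illustration of why point-to-point integration explodes combinatorially
# while a model-based approach grows linearly. All numbers are hypothetical.

def point_to_point_mappings(num_tools: int, artifact_types_per_tool: int) -> int:
    """Each pair of tools needs its own field-level mapping per artifact type."""
    tool_pairs = num_tools * (num_tools - 1) // 2
    return tool_pairs * artifact_types_per_tool

def model_based_mappings(num_tools: int, artifact_types_per_tool: int) -> int:
    """Each tool maps once to a shared model per artifact type."""
    return num_tools * artifact_types_per_tool

if __name__ == "__main__":
    tools, artifact_types = 8, 30  # illustrative: 8 core tools, 30 artifact types
    print(point_to_point_mappings(tools, artifact_types))  # 840 pairwise mappings
    print(model_based_mappings(tools, artifact_types))     # 240 mappings to the model
```

Under these assumed numbers, adding a ninth tool costs only 30 new mappings in the model-based case, versus roughly 240 new pairwise mappings with point-to-point integration.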

But what exactly is model-based integration? And what makes it so superior to point-to-point mappings and templates? The below infographic covers the core basics:

Want to know more?

Download our new e-book on model-based integration to gain a deeper understanding of the power of models. The document expands on the infographic to explain:

  • The importance and benefits of toolchain integration
  • Why point-to-point mapping struggles to scale
  • What model-based integration is
  • How model-based integration works
  • Why models are superior to point-to-point mapping and templates

Request a highly customized demo to see how model-based integration can provide a reliable, scalable and easy-to-use infrastructure to help you grow and optimize your enterprise software delivery.

*Based on Tasktop calculations.

The post Forget point-to-point: why models are the only way to scale toolchain integration appeared first on Tasktop Blog.

What’s new in Tasktop Integration Hub 18.3?

Tue, 07/24/2018 - 13:09

Tasktop Integration Hub 18.3 is available today, introducing the ability to ignore specific errors, receive email notifications for errors and issues, an updated metrics dashboard with model and user ID counts, change detection interval fine-tuning, improved visibility on background jobs, and the new PTC Integrity Lifecycle Manager connector.

Ignore Specific Errors

Tasktop’s Activity screen displays a list of any errors that occurred in your integrations, under the Errors tab. Sometimes, you may decide that a specific error is one you can ignore and don’t need to resolve. You definitely don’t want this error cluttering your error list and affecting the error count.

In version 18.3, we’ve added the ability to ignore certain errors in a given integration. After determining that you can ignore a specific error, you move it to the ‘Ignored Errors’ list. The error will no longer appear in the Errors main list and no longer affect the Error count in the summary banner.

Activities related to ignored errors are not canceled, however, and they will be retried if relevant. If these errors become relevant again, you can access them from the ‘Ignored Errors’ list and stop ignoring them, which will restore them to the main Errors list.

Learn more in our User Guide here.

Email Notifications for Errors and Issues

It’s now possible to receive notifications of errors and issues directly to your Inbox with email notifications. Tasktop admins will receive a digest of all the errors that occurred since the previous email was sent, at an interval you define.

Learn more in our User Guide here.

Models and User Counts Introduced to the Metrics Dashboard

The Metrics Dashboard is where you can see the volumes of artifacts created and updated by your integrations over time. In this new version, we’ve added two additional data points.

First, you can now see the number of artifacts created or updated per model to gain quick insight into what is flowing.

Second, you can now see the number of unique User IDs Tasktop is seeing per integration or repository over time in the cumulative statistics. This data can help convey how many people in your organization are actually benefiting from toolchain integration.

Learn more in our User Guide here.

Change Detection Interval Fine-Tuning

In this new version, the system-wide change detection and full scan intervals can be overridden for each collection participating in a given integration. This new option allows admins to control the impact of queries on specific repositories. We’ve also set the default full scan interval to 24 hours.

The change detection interval defines how often Tasktop queries the repository for relevant artifacts that have been modified since the previous query, based on the artifact’s ‘last modified date’.

In full scans, Tasktop queries the repository for every relevant artifact in order to capture changes that don’t cause an update to the artifact’s ‘last modified date’. If this is the case with pertinent changes in one of your collections, you might want to shorten the full scan setting accordingly.

Let’s take the example of an IT organization that has a Jira-to-ALM integration that is also flowing attachments. When a new attachment is added in Jira, the ‘last modified date’ gets updated. However, when an attachment is added to ALM, the ‘last modified date’ does not get updated. Hence, for that specific integration, you may want to shorten the full scan interval for the ALM collection.
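As a mental model for how these two scan types differ, here is a minimal, hypothetical sketch in Python. It is not Tasktop’s implementation; the FakeRepository class and its methods are stand-ins invented purely to illustrate the difference between an incremental change-detection query and a full scan:

```python
from datetime import datetime, timedelta

CHANGE_DETECTION_INTERVAL = timedelta(minutes=1)   # frequent, cheap incremental query
FULL_SCAN_INTERVAL = timedelta(hours=24)           # matches the new default in 18.3

class FakeRepository:
    """Stand-in for an ALM/Jira-style repository with a 'last modified date' per artifact."""
    def __init__(self, artifacts):
        self.artifacts = artifacts  # list of dicts: {"id": ..., "last_modified": datetime}

    def changed_since(self, since: datetime):
        # Change detection: only artifacts whose 'last modified date' moved past `since`.
        return [a for a in self.artifacts if a["last_modified"] > since]

    def all_artifacts(self):
        # Full scan: every relevant artifact, catching changes (such as some attachments)
        # that don't bump 'last modified date'.
        return list(self.artifacts)

repo = FakeRepository([
    {"id": "DEF-1", "last_modified": datetime(2018, 7, 24, 9, 0)},
    {"id": "DEF-2", "last_modified": datetime(2018, 7, 24, 12, 30)},
])
print(repo.changed_since(datetime(2018, 7, 24, 10, 0)))  # incremental scan finds DEF-2 only
print(len(repo.all_artifacts()))                          # full scan touches everything
```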

Learn more in our User Guide here.

Improved Visibility on Background Jobs

We’ve added a new tab to the Activity screen where Tasktop admins can see the progress of background jobs, for example applying project and domain name changes in ALM, or upgrading to a new Tasktop version.

Learn more in our User Guide here.

New ALM Tool Supported: PTC Integrity Lifecycle Manager

PTC Integrity Lifecycle Manager is an ALM (Application Lifecycle Management) platform that helps teams deliver higher quality, more innovative software and systems with less risk. Business analysts, architects, engineers, developers, quality managers, testers, and other stakeholders use PTC Integrity Lifecycle Manager to collaborate and control the product development lifecycle.

Synchronizing Requirements

Software delivery organizations whose developers use PTC Integrity often express the need to synchronize requirements from a requirements management tool into PTC Integrity, where the requirements can be broken down and worked on by developers. In addition, they often want to flow defects logged in PTC Integrity to the requirements management tool, so the product managers and business analysts have visibility on them.

Other organizations use PTC Integrity for requirements management and want to flow those requirements to an Agile planning tool for developer implementation, or to a test management tool where they can be used to design high-quality test coverage.

With Tasktop Integration Hub, those integration patterns and many more are now available to PTC Integrity Lifecycle Manager users.

Tasktop synchronizes requirements and defects to and from PTC Integrity and the rest of the software delivery toolchain.

The demo video below shows an integration between Jama, a requirements management tool, and PTC Integrity – used here by developers for Agile planning and defect tracking. This integration improves collaboration between the product team and the developers, eliminates duplicate data entry between systems, and improves product quality and traceability.

Learn more in our User Guide here.

Terminology and Graphical Interface Changes
  • “Work Item Synchronization + Container Mirroring” integration template has been renamed “Container + Work Item Synchronization”
  • Leankit connector has been renamed Planview LeanKit, with updated icon
  • VersionOne connector has an updated icon

The post What’s new in Tasktop Integration Hub 18.3? appeared first on Tasktop Blog.

How Value Stream Management can help CIOs transform their business

Mon, 07/23/2018 - 13:15

Please note: the author of this blog and star of the video, Adam, is a fictional character based on the real experiences of CIOs and IT organizations that Tasktop has spoken to and/or worked with.

If you’re an under-pressure CIO who is struggling to transform your IT organization despite your best efforts, you’ve come to the right place. Because I’ve been there too.

I know all too well that the Age of Software is a major threat to my company’s prosperity, and that our future existence rests largely upon my shoulders and my IT team. It’s tough out there for CIOs. Our profession can often appear unenviable, especially when you follow the transformational playbook to a T and still fail to see any tangible gains.

Time isn’t our friend, either. Every day counts. General Electric dropping out of the Dow Jones Industrial Average – the bellwether of the U.S. economy – was a stark reminder that no one is safe in the Age of Software. 

GE’s fall from the famous index – as its last remaining 19th-century member and a giant of the industrial age – sent a clear message to enterprises across all industries: no one, and I repeat no one, is safe from digital disruption. Not that we CIOs need that reminder…

We’re fully aware of the mammoth task at hand and what’s at stake – we know full well that if we don’t accelerate our organization’s transformation sooner rather than later, we will be among the 50 percent of CIOs that Gartner predicts will be out of a job by 2020 for failing to transform their teams’ capabilities.

And even if we start seeing benefits of our transformation, are we set up to keep the momentum of that transformation going? After all, as Courtney Kissler (VP Digital Platform Engineering, Nike) reminds us in the foreword of the book Accelerate: Building and Scaling High Performing Technology Organizations, a transformation isn’t a program, it’s a “learning organization” that’s “never done.”

Now, while there are many parts of the IT organization that have rightfully deserved our ire at some point or another, there’s one area more than any other that used to drive me up the wall – and that was our software delivery.

I poured huge amounts of resources into accelerating and scaling our operations to little avail. Our customers continued to complain that IT wasn’t delivering value fast enough. In the meantime, startups and digital-native companies continued to grab their slice of our market share. I felt like I was being pulled in all directions.

Why were my investments into our Agile and DevOps transformations failing? Why was the time to value (TtV) of our products still unpredictable, unmeasured and far too long? Why couldn’t I see how business value was flowing across the software delivery process? Where were the bottlenecks? The waste? The opportunities for process improvement? Why couldn’t I bridge the gulf between IT and the business?

As you can see in the below video, it turns out that I was approaching software delivery the wrong way. What I needed was a framework and system that focused on the flow of work and value across the IT organization:

If my journey resonates with you, please sign up for my educational newsletter. Over the next few months, I will be sharing tips on best practice, informative content, upcoming webinars, and highlighting other events that will help any IT organization to transform their software delivery from weakness to a strength to generate more value for the business.

Learn more about Value Stream Management

You can learn more about Value Stream Management by clicking on the images below:

 

Getting started – speak to our value stream experts 

Speak to us today about a complimentary one-hour consultation with one of our value stream experts to help you start visualizing, measuring and optimizing the value streams that exist in your business today.

The post How Value Stream Management can help CIOs transform their business appeared first on Tasktop Blog.

Announcing ‘Project to Product’ book and the Flow Framework

Wed, 07/18/2018 - 15:13

I’m delighted to announce that my book Project to Product will be released at the DevOps Enterprise Summit 2018 (Las Vegas), on Oct 22, 2018.

As startups disrupt every market and tech giants pull further ahead of entrenched businesses, the majority of enterprise IT organizations are facing an existential crisis. Either they quickly become much better at software delivery, or they risk becoming a digital relic.

Mastering large-scale software delivery will define the economic landscape of the 21st century, just as the mastery of mass production defined the landscape in the 20th. Unfortunately, business and technology leaders are woefully ill-equipped for the Age of Software because they are using management paradigms from past technological revolutions. While technologists adopting DevOps and Agile have already made the transition, the gap between modern technical practices and the business has only widened in the process. We need a new approach in order for the majority of the world’s organizations to thrive in this new age.

You can pre-order now by clicking on the front cover.

Project to Product provides leaders with the missing framework needed to create a Value Stream Network — the technology equivalent of an advanced manufacturing line that comprises thousands of IT professionals.

Leading up to the publication of the book in October, every two weeks I will be blogging about some of the core concepts that led to the creation of the Flow Framework, as well as highlighting conference presentations, book signings and other events I will be involved in.

If you are interested in following this journey, please sign up for my newsletter and stay in touch.

Visit the IT Revolution website for more about my book, where you will also find a host of other compelling books about best practice for IT organizations.

The post Announcing ‘Project to Product’ book and the Flow Framework appeared first on Tasktop Blog.

How leading IT organizations are using Value Stream Integration to generate more business value through software delivery

Tue, 07/17/2018 - 08:36

Earlier this year, Tasktop analyzed 300 value stream diagrams of the largest U.S. enterprises across major industries such as financial services and healthcare to better understand how organizations are delivering software at scale. We wanted to know exactly how global leaders were successfully combating the threat posed by digital-savvy businesses and continuing to be innovative leaders in their field.

While Value Stream Integration was playing a key role in helping them manage and improve their software delivery value streams, we wanted to go deeper than that; beyond the sheer productivity benefits of, say, flowing defects between the tools used by development and test teams. We wanted to better understand the business value they were generating through integration.

Through this research, we identified some striking similarities between these global heavyweights in terms of mindset and approach to software delivery.

Leaders are doing things differently…

  1. They’re looking beyond Agile and DevOps and thinking end-to-end to optimize how work (value) flows from customer request to operation and back through the customer feedback loop
  2. They’re defining, connecting and managing their Value Stream Network through integration and investing in Value Stream Management
  3. They’re building a modular infrastructure, enabling them to plug teams and tools in and out without disrupting existing workflows
  4. They’re focusing on end-to-end flow time metrics to identify bottlenecks and efficiency opportunities to optimize the process (see the sketch after this list)
  5. They know there is ‘no one tool to rule them all’. They recognize that they need to embrace an integrated best-of-breed tool strategy that provides specialization at each key stage of the process, with all tools working together as one cohesive system.
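To make the flow time metric in point 4 concrete, here is a minimal sketch of how end-to-end flow time could be computed from work item timestamps. The work items, field names and dates below are hypothetical; in practice these timestamps would come from the connected tools in the value stream:

```python
from datetime import datetime
from statistics import mean

# Hypothetical work items with the timestamps a connected toolchain could expose.
work_items = [
    {"id": "FEAT-101", "accepted": datetime(2018, 6, 1),  "released": datetime(2018, 6, 20)},
    {"id": "FEAT-102", "accepted": datetime(2018, 6, 5),  "released": datetime(2018, 7, 2)},
    {"id": "DEF-310",  "accepted": datetime(2018, 6, 10), "released": datetime(2018, 6, 14)},
]

def flow_time_days(item) -> int:
    """Elapsed days from the moment work is accepted to the moment it reaches the customer."""
    return (item["released"] - item["accepted"]).days

print({item["id"]: flow_time_days(item) for item in work_items})
print("average flow time (days):", mean(flow_time_days(i) for i in work_items))
```

Rolling this calculation up by product or team is one simple way to spot where work sits the longest and where bottlenecks are forming.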
Download our white paper on optimizing Jira by clicking on the above graphic.

What tools are leaders using?

There are some excellent specialist tools out there that address every facet of the software delivery process. Some are mature legacy tools, while others are newer to market but equally popular and powerful.

Some of the most popular tools used by leaders include:

Mature

  • Micro Focus (HPE) ALM (Test Management)
  • Microsoft TFS (Agile Development)
  • CA Clarity PPM (Project Management)
  • IBM RTC (Agile Development)
  • BMC Remedy (ITSM)
  • IBM DOORS NG (Requirements Management)

New

  • Atlassian JIRA (Agile Development)
  • ServiceNow Service Desk (ITSM)
  • CA Agile Central (Rally) (Agile Development)
  • Blueprint (Requirements Management)
  • Jama (Requirements Management)
  • Tricentis Tosca (Test Management)
What artifacts are leaders flowing?

Leaders recognize that collaboration between practitioners is the linchpin of their software delivery. They also recognize that artifacts are the “currency of collaboration” and that focusing on how this collaborative data is enriched and flowed between tools is critical for faster, better product development.

The most popular artifacts used by leaders highlight the most important stages of the process:

  • Story – descriptions of features of a software system
  • Epic – a chunk of work that has a common objective
  • Ticket – information that relates to an issue or problem from the field
  • Defect – a bug, failure or flaw
  • Test Case – a set of conditions or variables that determine if a system satisfies requirements
  • Requirement – what the user expects from a new or modified product
  • Feature – the functionality of a software application
What are they connecting?

Leaders are implementing a similar set of integration patterns. These patterns are frequent and critical interactions between collaborators and tools (via artifacts) at key stages of the software delivery process.

We’ve come a long way from just fixing bugs in code. While the first pattern – and still the most popular – is developer-tester alignment (via defects), we’ve identified a number of integration patterns across our customers’ value streams that dramatically improve the efficiency of their workflow.

The growth of these patterns reflects the evolution of software delivery as it becomes increasingly more complex and sophisticated as more roles, tools, workflows and artifacts emerge, intertwine and depend on each other.

More patterns will continue to emerge as organizations seek to improve efficiency across the process to make it more manageable and effective, and map their integrations to business drivers. While end-to-end integration must be the end goal as it encompasses the whole value stream and connects that all-important network, you may not be as far behind the leaders as you think.

Our research into the number of tools that organizations are integrating found:

The number of tools that respondents are integrating.

Types of integration patterns

Below are the 11 common integration patterns that leaders are using. The sophistication of these chained patterns has grown significantly over the last five years:

1) Project Portfolio Management – Requirements Management Alignment

Why: Brings the people who manage product workflow closer to the people who understand what the customer needs from the product.

2) Requirement Management – Agile Planning Alignment

Why: Brings the people who understand what the customer needs from the product closer to the people who build it to ensure it delivers value.

3) PMO-Agile Plan Orchestration

Why: Brings the people who manage the products closer to the people who build them to ensure that development is on schedule and on budget.

4) Developer-tester alignment

Why: Brings the people who build the product closer to the people who test the product to reduce defects in production.

5) Help Desk Feature Request

Why: Brings the people who log customer product feature requests closer to the people who resolve them.

6) Known Defect Status Reporting To Help Desk

Why: Enables the people who build the software to keep the people who work closely with the customer aware of any known issues (defects) going into production.

7) Help Desk Incident Escalation

Why: Brings the people who log customer product issues closer to the people who fix them.

8) Requirements Traceability

Why: Traces the product journey from the people who plan and design the software to the people who build it. This traceability allows all stakeholders to understand a product’s development across key stages to increase product accuracy, speed of delivery, and helps identify where any issues originated for faster time-to-resolution.

9) Requirements Management To Test Planning

Why: Brings the people who test products closer to those who know what the customer needs, to ensure test coverage meets strict regulatory and compliance requirements.

10) Supply chain integration

Why: Brings all stakeholders involved in the process from outside the organization closer to those inside the organization to ensure consistency of information, compliance of process, and better supply chain collaboration.  

11)  Consolidated Reporting

Why: The holy grail of reporting is to obtain “one source of the truth.” Yet, it’s so difficult to get this view when critical information relating to a product’s development is siloed in different tools. Integration aggregates all data into one database that can be used for reporting purposes.

Common characteristics of leader success stories

Challenges

  • Multiple best-of-breed tools, with no streamlined processes
  • Poor visibility into the end-to-end flow of work, making it difficult to measure and improve performance
  • Regulated industries that require traceability to prove safety-critical software has been appropriately tested
  • Degraded productivity due to manual work, duplicate entry, and collaboration through email, spreadsheets and status meetings
  • Disruptions via acquisitions, mergers, and reorganizations

Benefits

  • Enhanced efficiency and coordination
  • Improved visibility and traceability
  • Future-proof infrastructure to adapt to evolving business needs

Key takeaways

  • Value Stream Thinking is vital to the success of Agile, DevOps and other IT transformations
  • Enterprises with connected value streams are thriving
  • A connected value stream is a key enabler in the shift from managing software projects to delivering products and business value
  • A sophisticated integration infrastructure is required to bring the value to life
Learn more

For a more in-depth analysis into the research, watch the below webinar featuring Nicole Bryan (our VP of product) and Chandler Clemence (Product Analyst), where they share the results of an analysis of 1,000 tool integrations to learn:

  • How IT tool integration accelerates enterprise software delivery
  • How to implement 11 popular tool integration patterns
  • Strategies to reach integration maturity through chained integration patterns
Click on the graphic to watch the webinar.

How Tasktop Helps

Tasktop’s Value Stream Management solution, underpinned by our pioneering model-based approach to integration, provides customers with a host of benefits, including:

Automates Information Flow Across Value Stream

  • Enables the frictionless flow of artifacts, as well as information from events across the value stream
  • Removes non-value-added work and bottlenecks
  • Increases velocity and capacity
  • Provides automated traceability
  • Dramatically improves employee satisfaction (no manual handoffs etc.)

Enables Value Stream Visibility

  • Provides real-time view of product statuses
  • Unlocks lifecycle activity data from separate tools
  • Automatically compiles data into single database
  • Enables management to create dashboards and reports for holistic view of value stream

Creates a Modular, Agile Toolchain

  • Enables organizations to use products that best support each discipline
  • Drives more value from each tool
  • Easily add, replace and upgrade tools (ideal for mergers and acquisitions, and restructuring)
  • Creates proactive environment for innovation

With this connected network, organizations can finally see, manage and optimize one of the most important processes in their business: the engine that drives their prosperity in a digital-centric world. The importance of having this network, this complete system, cannot be overstated. This is the state where innovation thrives, and where continuous improvement can be executed. And, of course, where you gain the essential visibility to be confident you’re always building the right products to drive your business like a leader.

Getting started – how to become a leader

Speak to us today about a complimentary one-hour consultation with one of our value stream experts to help you start visualizing, measuring and optimizing the value streams that exist in your business today.

The post How leading IT organizations are using Value Stream Integration to generate more business value through software delivery appeared first on Tasktop Blog.

12 KPIs to help you improve the quality of your software delivery

Mon, 07/09/2018 - 08:55

The mad rush to deliver software faster is a major threat to an organization’s quality control and brand integrity. QA and test teams are under pressure like never before to ensure that software products are always functional, reliable and delivering value to end users. If things go wrong, you can bet your bottom dollar that test managers and their teams will be first in the firing line.

The velocity and volume of work isn’t the only issue, either. It’s how the work flows. A software delivery value stream comprises multiple stages underpinned by a network of teams, tools and processes. All these touchpoints and routes can disrupt and contaminate the flow, damaging a product’s quality. Proper orchestration of this network and of how work flows across the value stream is key to creating an effective end-to-end testing infrastructure.

As Matt Angerer, our pre-sales architect, explains in his article for SD Times, more testers and more automation aren’t the answer. Sure, test automation is a critical component of your overall testing strategy, along with having the right team of QA analysts and testers. But adding more testers to increase coverage, or automating just for the sake of automating, can create unnecessary overhead in your value stream.

To remain lean, Agile and adaptable, you need to closely examine and measure your data points. “The answer,” he writes, “is in the data.” Matt goes on to propose 12 KPIs to track, which can help you unlock the full potential of your QA organization:

  1. Active defects
  2. Authored tests
  3. Automated tests
  4. Covered requirements
  5. Defects fixed per day
  6. Passed requirements
  7. Passed tests
  8. Rejected defects
  9. Reviewed requirements
  10. Severe defects
  11. Test instances executed
  12. Tests executed

By understanding the indicators of quality, you can better position your people, adjust your processes, and decide whether you have the right enabling technology in place to improve quality while accelerating velocity. Most organizations make adjustments before closely examining and measuring these KPIs over time. The key is to understand and document the trends that occur within teams, projects and products. By understanding and documenting QA trends, a QA leader is better able to pivot their team accordingly and deliver in lockstep with the rest of the IT organization.
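As a simple illustration of tracking a few of these KPIs over time, here is a minimal sketch. The weekly snapshot data structure and the numbers in it are hypothetical, intended only to show the kind of trend analysis described above:

```python
# Hypothetical weekly QA snapshots; a real pipeline would pull these figures from the
# test management and defect tracking tools in the value stream.
weekly_snapshots = [
    {"week": "2018-W25", "tests_executed": 420, "tests_passed": 361, "defects_fixed": 18, "active_defects": 57},
    {"week": "2018-W26", "tests_executed": 465, "tests_passed": 419, "defects_fixed": 25, "active_defects": 44},
    {"week": "2018-W27", "tests_executed": 510, "tests_passed": 478, "defects_fixed": 22, "active_defects": 31},
]

for snap in weekly_snapshots:
    pass_rate = snap["tests_passed"] / snap["tests_executed"]  # "Passed tests" as a ratio
    fix_rate = snap["defects_fixed"] / 5                       # "Defects fixed per day" over a 5-day week
    print(f'{snap["week"]}: pass rate {pass_rate:.1%}, '
          f'{fix_rate:.1f} defects fixed/day, {snap["active_defects"]} active defects')
```

Even this crude view shows the kind of trend a QA leader would look for: a rising pass rate and a falling count of active defects week over week.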

Read on: Unlock the full potential of a QA organization by tracking these KPIs

How Value Stream Integration improves Quality Management

In many organizations, it’s up to the testing and QA teams to declare whether an application is ready to ship and deliver value to customers. To make that critical decision, they need real-time information from across the toolchain to assess the health of a product. Value Stream Integration helps flow that critical information across tools to improve Quality Management. Check out the white paper below to learn more:

Click on image to download.

Want a more personal touch? Request a highly customized demo of how Tasktop can help you connect your end-to-end value stream so you can measure, improve and optimize your enterprise software delivery.

The post 12 KPIs to help you improve the quality of your software delivery appeared first on Tasktop Blog.
