Tasktop Blog
Connecting the world of software delivery.

Software-Defined Technologies: Transforming the Value Stream

Thu, 03/23/2017 - 11:00

Software-defined is a concept that refers to the ability to control some or all of the functions of a system using software. The concept is sometimes incorrectly characterized as a buzzword or marketing jargon, when in fact it has a clear meaning that needs to be understood by organizations looking to keep pace with change.

When technologies become software-defined, there are major systemic benefits for organizations that use them, including lower costs, higher quality products and services, and less risk.

At the same time, software-defined technologies require major organizational changes for incumbent enterprises to adopt and use effectively. This often involves expensive and risky transformation projects that reengineer the value stream to take advantage of decoupled components, reduced dependencies and new management capabilities.

Today we will look at the origins of the “software-defined” concept and how its application presents both opportunities and challenges to the enterprise.

The beginning: ‘Software-defined Radio’

The software-defined concept comes to us from the evolution of radio transmission technology. A traditional radio communications system uses physically connected components that can only be modified through physical intervention. The antenna connects to the amplifier, which connects to the modulator, and so on. Operators are locked into the specifications of the components, the order in which they are connected, and whatever controls they expose. It’s an extremely inflexible technology and changes are best done by simply buying a new system.

As you can imagine, for businesses that operate large-scale radio deployments such as wireless telecom providers, technology decisions are hugely impactful. They can last decades and demand large upfront planning and capital costs. Keeping pace with change is extremely expensive and difficult.

Base Transceiver Station

In the mid-eighties, however, researchers began to take specific components of the radio and make them digital, implementing functions like oscillators, mixers, amplifiers and filters by means of software on a computer. By emulating these functions in software, the system becomes adaptive and programmable, and can be configured according to the needs and requirements of the operator, rather than the specifications of the manufacturer.

In 1995, the term Software-Defined Radio (SDR) was coined to describe the commercialization of the first digital radio communication system, and this development changed the way these services and products can be delivered.

On the technical side, in becoming software-defined, many functional limitations are removed from radio systems. For example, by simply reprogramming the software, a device can have its frequency spectrum changed, allowing it to communicate with different devices and perform different functions. This has enabled a quick succession of technical advances that were previously the domain of theory and imagination, like ultrawideband transmission, adaptive signaling, cognitive radio and the end of the “near-far” problem.

On the business side, the changes are equally profound, having a significant impact on the value stream of enterprises throughout the wireless and radio industry, and on the industry itself. A wireless telecom provider employing software-defined radio can easily add new features to its network, adapt its systems to take advantage of new spectrum bands, or reconfigure itself when a new handset technology like LTE 4G becomes available. A telecom provider able to reconfigure its infrastructure by deploying updates to software rather than by buying new hardware can take advantage of huge operational savings while eliminating capital expenses.

SDR therefore provides significant strategic advantage to these businesses, introducing adaptability, modularity and agility to the organization where it was previously rigid and inflexible.

Taking advantage of SDR, however, is a long, transformational process, needing a lot of capital and a significant departure from the status quo. Not only does it require changing all infrastructure over to the new technology, but it also requires the business to think differently and reengineer the value chain to take advantage of the new capabilities.

Software-defined Infrastructure

The IT industry has also been deeply impacted by the advent of software-defined technologies. The following examples have created industries and enabled a generation of evolved products and services:

  • Hypervisors – A hypervisor is a specialized operating system that runs virtual machines, such as VMware ESXi or Microsoft Hyper-V. It runs directly on the physical machine, abstracting and distributing the hardware resources to any number of virtual machines. This has undoubtedly been one of the largest and most impactful advances in IT in the last 20 years, ushering in the era of point-and-click server deployment and changing the way we manage and deliver IT services.
  • Software-defined Networking (SDN) – Traditionally, operating a network means managing the lower-level infrastructure that allows devices to connect, communicate with each other, and figure out where to send their packets. These switching devices – called “layer 2 devices” – each need to maintain their own state and configuration information, and make decisions about how to route packets based only on limited, locally available information. SDN abstracts layer 2 networking, and is the ‘secret sauce’ behind cloud computing – a critical functionality for all public cloud services including AWS, Azure and OpenStack-based providers. It allows the service provider to centralize routing and switching, and provides the orchestration capability required for large-scale multi-tenancy, i.e. the ability to create and manage millions of logically isolated, secure networks. (A toy sketch of this centralized-control model follows this list.)
  • Network-function virtualization (NFV) – Building upon SDN, NFV allows services like load balancers, firewalls, IDS, accelerators, and CDNs to be deployed and configured quickly and easily. Without NFV, operating infrastructure at scale requires a lot of capital investment and an experienced team of highly specialized network engineers. NFV makes it easy to deploy, secure and manage these functions without having to understand the complexities under the hood.
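To make the SDN idea concrete, here is a toy Python sketch. It is not any vendor’s actual API – just an illustration of the core model: switches hold no routing logic of their own, while a central controller owns all state and pushes simple match/action rules down to every device.

```python
class Switch:
    """A 'dumb' forwarding device: it applies whatever rules it is given."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match -> action, installed by the controller

    def install_rule(self, match, action):
        self.flow_table[match] = action

    def forward(self, dst):
        # In real SDN, unknown destinations are punted to the controller;
        # here we simply drop them to keep the sketch short.
        return self.flow_table.get(dst, "drop")


class Controller:
    """Centralized control plane: the one place where state and policy live."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def connect_tenant(self, dst, out_port):
        # One logical change here reconfigures every device at once,
        # the orchestration property behind large-scale multi-tenancy.
        for sw in self.switches.values():
            sw.install_rule(dst, f"out:{out_port}")


ctrl = Controller()
ctrl.register(Switch("edge-1"))
ctrl.register(Switch("edge-2"))
ctrl.connect_tenant("10.0.0.5", out_port=7)
print(ctrl.switches["edge-1"].forward("10.0.0.5"))  # -> out:7
```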

“Software-defined” Defined

Having looked at where the concept came from and a few examples of modern software-defined technologies, I propose the following definition for what it means to be “software-defined”:

Software-defined means some or all of the functions of a system can be managed and controlled through software.

Some key attributes of a software-defined technology:

  1. The functions are abstracted
    • Software-definition strives to have stateless functions i.e. functions that do not maintain their configuration or state themselves. State and configuration information is maintained outside the function, i.e. in the software. By decoupling the state and configuration from the function and centralizing it, we gain adaptability, resilience, and the benefit of visibility at scale.
  2. Software controls functionality
    • No direct operator or human intervention is required for the function to operate – functions are managed solely through software. Management and administration are therefore decoupled from the function. We gain the ability to automate processes and activities, and manage the system independently from functional limitations.
  3. Functional components are modular
    • The software layer operates independently of any particular functional component. This means the functional components can be commoditized, modular and scalable. We can easily change or replace these components without disrupting the system. (A minimal code sketch of these attributes follows this list.)
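As a minimal sketch of the first two attributes (all names here are illustrative, not drawn from any real system), the function below is stateless: its state and configuration live in an external software layer, so reconfiguring the system is a software operation rather than a physical intervention.

```python
# State and configuration live outside the function, in the software layer.
CONFIG_STORE = {"radio-1": {"frequency_mhz": 98.5}}

def demodulate(device_id, signal):
    """Stateless: behavior is fully determined by externally held config."""
    freq = CONFIG_STORE[device_id]["frequency_mhz"]
    return f"demodulating {signal!r} at {freq} MHz"

print(demodulate("radio-1", "sample"))   # -> at 98.5 MHz

# Reconfiguring is a software operation, not a hardware intervention:
CONFIG_STORE["radio-1"]["frequency_mhz"] = 101.1
print(demodulate("radio-1", "sample"))   # -> at 101.1 MHz
```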

Adoption through Transformation

On the face of it, software-defined technologies are better-faster-stronger, and companies that use them will have a competitive advantage over those that do not. They lead to lower costs, higher quality and less risk for the business. Organizations building products and services that leverage these technologies can use them to disrupt incumbent enterprises.

For those enterprises, however, especially those locked in the middle of a legacy lifecycle, software-defined technologies present a significant challenge. Adoption requires rethinking the value stream and integrating with legacy systems. As stated in the book Lean Thinking,

“We are all born into a mental world of ‘functions’ and ‘departments,’ a commonsense conviction that activities ought to be grouped by type so they can be performed more efficiently and managed more easily” (p. 25)

While advances in technology usually provide solutions to problems, for large enterprises software-defined technologies are the problem. Not only do they present a threat to the business in the hands of startups, they also explicitly change the way functions and activities are organized, operated and managed. They demand rethinking how, where, when and by whom functions should operate in the value stream. They give startups the ability to disrupt entire industries while only creating waste for organizations that try to adopt them without first undergoing a transformation.

Not only do enterprises therefore face the challenge of developing competencies with new software-defined systems, they also face the challenge of changing their culture, reorganizing their team structures and reengineering their value stream. People in these organizations will need to be highly flexible and open to a paradigm shift in how they think about their work, their roles, and their activities in the value stream. Adoption needs to be driven through large-scale (and risky!) change and transformation projects.

In my next article, we will look at the next generation of business transformation projects – digital transformations – and see how the approach to deploying software-defined technologies in the value stream impacts success.

Putting Data Into Context: The Power of the Right Information at the Right Time

Tue, 03/21/2017 - 09:25
Part of a series of reviews on industry events, this blog talks about Tuuli’s reflections on Swiss Testing Day and the IBM CE event in March 2017.

Have you ever received an email and thought: “Why am I receiving this email? What’s the context?” It’s likely that someone had a discussion outside the email thread and, with the best intentions, sent out an email to everyone they thought was affected by the topic of conversation, but forgot to reference the relevant material in the email.

Receiving information without context is not only frustrating, but can sometimes have undesired implications. Let’s imagine a war room discussion about a major incident affecting a CRM system, where the fix required applying a patch from a vendor. Without knowing the context, and thus the possible fix, the poor owner of the CRM system receives complaints not only from their team but also from customers, and these often get escalated to management very quickly.

Last week I had the pleasure of attending Swiss Testing Day in Zurich and presenting at an IBM Continuous Engineering Symposium in London.

Though the subjects were fairly different, two clear themes emerged: the importance of data relationships and traceability. You can see how both of these derive from the same challenge of data context. Below I’ve broken down the two themes to explore these further:

Data Relationships

  • Helps understand connections.
  • Helps decide what action (or non-action) to take: Who needs to know? What is the importance of this information? Who is or will be impacted? Who can I go to for more on this? What dependencies are there?
  • Julius Bär’s presentation mentioned this a number of times.

Traceability

  • Presenting information in light of historical data.
  • It helps answer difficult questions such as: What happened before this information was sought? What triggered it? How can we prevent it from happening again?
  • What is the predicted risk? What may happen in the future based on the history?
  • It not only helps with audits, but also helps understand the story behind the data.

And all of this context boils down to one thing: it allows you to make informed decisions. But remember, all the information presented is open to interpretation, and the action or lack of action is up to the human reading and interpreting the data.

In our first example of the broken CRM system, the system owner might have prevented the issue had they been informed about the known error and vendor fix beforehand. They could then have taken action without having to deal with the issue and customer satisfaction at the same time.

Tuuli with Tasktop partners Informetis AG and Tricentis at Swiss Testing Day 2017.

Value Stream Automation = Superior Software Delivery

Mon, 03/20/2017 - 11:16

Automation isn’t a new trend in IT, but its increasing influence continues to dramatically transform business-critical processes including software delivery. In a time where speed and productivity can make the difference between the failure and success of a project – and provide that all-important competitive edge – automation is crucial in vanquishing repetitive tasks that are slow, onerous and susceptible to human error.

The quality of software and the speed with which it can be delivered relies on the real-time flow of information between the people, teams and tools across the lifecycle. That’s why the most successful organizations are prioritizing value stream integration to automate the flow of information across their software lifecycle.

Without value stream automation, stakeholders are forced to use manual means of sharing this information, such as exporting data from one tool and importing it into another, manually re-keying information, or sharing it during wasteful status meetings. These tasks are time-consuming and a perpetual drain on productivity, as well as undermining the accuracy and quality of the end product and/or service.

Fortunately, value stream automation eradicates these issues, ensuring the automated flow of information such as artifacts (e.g. defects, user stories, trouble tickets), as well as information from events (such as build features, version control changesets, security scan vulnerabilities and performance monitoring alerts).

By automating the value stream, you create a frictionless flow of information across a unified software development and delivery lifecycle that is seamless and waste-free, helping organizations to increase their teams’ effectiveness and empowering stakeholders by:

  • Automating handoffs across people, teams and tools
  • Providing real-time updates of status, comments, attachments etc.
  • Enhancing collaboration through in-context communication
  • Removing non-value added work, bottlenecks and wasted time
  • Increasing velocity, reducing errors and rework
  • Enabling automated traceability and visibility
  • Enjoying productivity-related savings of up to $10 million (based on Tasktop calculations for a 1,500-person software development and delivery team)

By integrating your software delivery value stream with Tasktop, you can transform how you plan, build and deliver software. We’ve done this for some of the most successful companies in financial services, healthcare, retail and automotive – including nearly half of the Fortune 100.

Speak to us today to see how you can integrate your value stream, automate the flow of information and greatly improve your software delivery capabilities.

How Value Stream Visibility Enhances Software Delivery

Wed, 03/15/2017 - 11:26

A software development and delivery lifecycle typically comprises many people, teams, tools and disciplines. Often the data within these multiple components is siloed and visibility across the value stream is poor (or even non-existent). Ultimately this means the quality and speed of the software delivery suffers, IT projects fail and Agile and DevOps transformations struggle to scale.

None of the popular best-in-breed software tools provide automated traceability across the value stream from ideation to production, meaning critical activity data is only reported in those individual tools. As a result, IT leaders have a fractured view into the health of their software delivery, inhibiting them from detecting patterns, spotting inefficiencies or tackling bottlenecks.

If tools within the value stream are not integrated, then there is no end-to-end visibility into the evolution of an artifact as it moves through the lifecycle. The lack of a holistic overview means it’s very hard to ascertain how the artifact has evolved across the disciplines and tools being used in the project, so the context of the artifact and the semantic understanding is lost.

However with Tasktop, whenever any of the artifacts change in any of the connected tools, this activity data is streamed to a centralized database. From there, the data can be manipulated and visualized using standard business intelligence tools by stakeholders involved in the lifecycle. This provides the basis of comprehensive metrics and governance programs, including:

  • Visibility into the full lifecycle of software development and delivery from ideation to production
  • Data for real-time insight into the state of application delivery and value creation
  • Consolidated metrics and KPIs for management, optimization and transformation
  • Automated traceability across the entire lifecycle

The data helps organizations to:

  • Identify bottlenecks in the value stream
  • Automate the creation of traceability reports for governance programs
  • Obtain a consolidated view of the status of application delivery
  • Merge application delivery metrics with financial reporting data to determine the true cost and benefits of IT initiatives

Tasktop provides this data and enables end-to-end visibility, traceability and more. For more information, see our brand new website and contact us today to discuss how we can give you complete visibility into your software lifecycle for optimized decision-making that will help you drive your Agile and DevOps transformations.


A Day Without Women

Mon, 03/13/2017 - 10:54

This past Wednesday was International Women’s Day. In conjunction, many women participated in a “Day Without Women” protest.

I know in the age of social media, we’ve already moved on to the next big thing. The articles are written, all the tweets are twitten. But I wanted to take a minute to give a guy’s opinion. Oh, and by the way, this is meant for the guys out there. Women, feel free to skip this post. You know this already.

It honestly feels a bit odd to write about this. As a guy, it’s easy for me to fall into one of three camps: 1) the well-intentioned but misguided mansplainer, 2) the troll asking why we don’t have a ‘Day Without Men’, or 3) the silent ally.

The first group feels like they’re doing good by jumping out in front of the movement and proclaiming what women should do. It’s hard to fault these guys, but it’s patronizing and implies that women don’t have the ability or autonomy to act and think independently. And while it may feel good, I’m not sure if it actually helps.

We can skip right over the second category.

I happen to think there’s a whole lot more to the third category than we’d like to admit. This group supports the Day Without Women cause. They’re 100% behind their colleagues striking and nod in agreement during happy hour when the issues of women’s rights and gender equality come up. These are ‘the good guys’, but these are the guys that don’t do anything. They’re not blocking the movement, but they’re not advancing it either.

Here’s the catch…I don’t want to belong to any of those groups.

I want to be more than that. The silent allies have no skin in the game. They have no voice.

Here’s my little chance to speak out. To take just a little risk by writing about what I saw. It’s not much, but it’s better than sitting on my butt doing nothing.

A Day Without Women…

I woke up Wednesday not realizing that there was a women’s strike about to happen. Only after checking my social media feed did I remember.

To give some background, my department consists of three men and three women (one of which is my boss). A few weeks ago, my boss told us she was participating in the strike and all of the women were encouraged to participate as well.

So Wednesday came, and while my boss did in fact take the day to protest, neither of the other two women did. They both had work responsibilities that needed to be attended to right away. One took part of the day, but the other worked a full day. Another woman who used to be on our team was also working that day.

I came home and talked to my wife. She’s a Product Manager at another software company. It was a completely normal day at her office. All of the women were still working. Coincidentally, she had one meeting that consisted of all women. This is just one example of how vital women are at her company.

Do I think the women on my team didn’t take the day off because they’re overworked or put upon? No. I think they went to work because they know their contributions are important and they were needed at their jobs that day. But it’s what another Tasktop woman said to me that provided a fresh perspective.

She told me that she worked, not because she didn’t agree with the strike, but because she feels supported here. She feels that our company has been good to her, supports women’s equality and needed her that day.

Some of my women colleagues who worked that day supported the strike in different ways: writing blogs, refraining from using their purchasing power that day and/or contributing to organizations that fight for women’s rights.

It was interesting for me to hear. Multiple women at Tasktop with strong convictions about women’s equality taking different paths all leading to the same gender equality goal.

Can’t be done

Because I’m a 40-year-old male, this type of social issue is not typically at the forefront of my thoughts. I thought we were pretty much past this. Obviously, I was wrong. Before International Women’s Day, my company sent out a request for employees to answer the question “Why do you feel having women at Tasktop and/or in STEM is important/positive?” You can see some of the replies in the subsequent blog post The Importance of Women in STEM. When I opened that email, I’ll be honest, I thought it was a bit silly. Why? Because I couldn’t imagine that anyone wouldn’t know that women are a valuable part of the workforce. Silly because I couldn’t believe that there are people out there who think the US economy could survive if the workforce reverted to what it looked like in the 1950s.

The simple answer is that Tasktop would not be as successful without women. And it’s not because they’re women. It’s because they’re smart, talented, and driven people. It’s because they’re the right people for the job. Full stop.

It seems to me that limiting yourself and your company to half the workforce, half the world, is simply a very bad idea. Women at Tasktop have designed our product, they’ve built our product, they’ve marketed our product and they’ve sold our product.

So while I didn’t march on March 8th, or take the day off, I’ll be doing my best from now on to be more than a silent ally.

The Importance of Women in STEM

Wed, 03/08/2017 - 08:36

On March 8th, Tasktop joins organizations and individuals around the world in support of International Women’s Day. International Women’s Day is celebrated globally to bring together women, men, and non-binary people to lead within our own spheres of influence by raising awareness and taking action to accelerate gender parity.

Said best by Tasktop VP of Product Management, Nicole Bryan, in her blog, Role Model Ladders: A Concrete Path to Getting More Women in Technology:

“Through small but intentional steps, we can change things.”
At Tasktop, 35% of employees and 40% of Tasktop’s Management team are female. Not only are we changing the world of software development and delivery, we’re doing it within an environment of diversity and inclusivity. Our teams recognize the importance of diversity within the workplace and outside of the workplace.

Tasktop President, Neelan Choksi, serves as a trustee at TechGirlz, a non-profit working to get adolescent girls excited about technology. In addition, the Tasktop engineering team is spearheading a Technovation Challenge designed to help give girls around the world the opportunity to learn the skills they need to emerge as technology entrepreneurs and leaders.

As current leaders in technology experiencing the benefits of diversity firsthand, many Tasktop employees shared their thoughts on why having women at Tasktop and in STEM is important:

“From co-founder to every team and level of management, Tasktop has thrived by actively seeking and fostering an environment for women in tech.  The diversity in thinking and problem solving produces better innovations and better business results.  As high tech businesses continue to become more complex and more creative, the companies that are enlightened to this will outperform those who are stuck in the stone age of boys’ clubs.”
Dr. Mik Kersten, Tasktop Co-Founder and CEO

“Having diverse teams at Tasktop enables a collaborative environment where voices with different experiences combine to create software that considers a problem from multiple perspectives. And it is a lot more fun to work in a diverse workplace!”
Gail Murphy, Tasktop Co-Founder and Chief Scientist

“The more ideas you have to choose from, the better the result will be.  Each person with a different background, a different outlook, a different approach to life brings different ideas to the table.  Diversity drives innovation by starting with more points of view represented.”
Dawn Baikie, Tasktop Senior Software Engineer

“You wouldn’t voluntarily walk through life with one arm tied behind your back, would you? Regardless of who you are, we all have our own experiences inside and outside of work, and it’s these individual experiences that provide diversity of thought, sow the seeds of creativity and drive innovative ideas. Tasktop, STEM and wider society are all greatly enriched by the input, output and presence of many talented women, many of whom I’m proud to call my colleagues, friends and family.”
Patrick Anderson, Tasktop Content Specialist

“For the first time in my career, I’m on a team with a majority of women. Even though gender does not matter – I happily work with both women and men – it’s great to work for a company who promotes equality (be that gender, race, background, disability, or other). Where [gender] equality is known to correlate with income, education, political empowerment, and health, I can’t help but think that diversity and equality within an organization results in stronger financial results for companies.”
Tuuli Bell, Tasktop Partner Account Manager, EMEA

“Tasktop has women working in technical roles across the company. Seeing the work they do has broadened my conception of what it means to be a woman working in STEM. I realize that no longer is it just about coding (though that is important too). My experience has shown that all levels and teams from a company benefit from women who exercise their analytical and critical thinking skills and combine them with their other unique abilities.”
Cynthia Mancha, Tasktop Product Manager

“It is important to me to have women at Tasktop and in STEM because I want my kids to think of gender parity as we think of women’s suffrage, something that’s not to strive for, but something you are confused about how it could have ever been in question.”
Thomas Ehrnhoefer, Tasktop Senior Software Engineer

“Diversity is a major driver of creativity and innovation. Years ago women were criticized for not behaving more like men in business. Now we know that embracing the differences in the way we think and make decisions drives innovation and business success.”
Joyce Bartlett, Tasktop Marketing Director

“Women represent nearly half of the workforce in the US today, but only a quarter of the jobs in STEM. The more we encourage women’s interest and passion in the fields of science, engineering, math and technology, the more we will see our perspectives and needs represented in society.  To be an agent of change is to help others visualize what’s possible.”
Emily Kelsey, Tasktop Regional Sales Manager

“Because it’s silly we still have to have this conversation. Women are half the workforce. Not only is it the good & right thing to do, it’s a competitive advantage. Do it to be a better company.”
Trevor Bruner, Tasktop Product Manager

“There is rarely a week that goes by where I don’t contemplate how fortunate I am to be a Tasktopian. It’s not just that the company is doing interesting and important things, but it’s how we do it. Earlier this week, our CEO (Mik Kersten) made a special effort to remind his team that Tasktop’s culture is a place where gender equality and justice run deep. He did so in support of those within the company who felt they wanted to support the ‘A Day Without a Woman’ initiative. At Tasktop, I am fortunate to work in an inclusive environment where diversity is a hallmark; where it’s recognized that the diversity in our thoughts, coupled with unity in action, leads to a stronger, more innovative company.”
Betty Zakheim, Tasktop VP of Industry Strategy

“It is important to have women as part of an IT organization to provide a range of critical thinking skills to their male counterparts. Including women creates a 360-degree view of challenges and possible solutions for a world that increasingly requires creative problem solving.”
Beth Beese, Tasktop Business Development Manager

“Having different perspectives and ideas on the table is essential to the process of good software design.  We rely heavily on diversity within the team for those ideas and perspectives.  Women bring a unique perspective that, when combined with other types of diversity, ultimately leads to better design decisions and better software.”
David Green, Tasktop VP of Architecture

Women pursuing careers in STEM don’t always have it easy, but at Tasktop we’re striving to pave the way for the future women of STEM. One of the many reasons I’m proud to be a Tasktop employee.

Hey Guys… While the Women Strike, Consider This…

Wed, 03/08/2017 - 06:45

I believe that International Women’s Day has special significance this year. With the heightened political atmosphere charged with undertones of misogyny, the Women’s March shining a light on women’s rights and voice, and high-profile news stories about sexual harassment, it just seems like International Women’s Day should be particularly meaningful and hopefully memorable in 2017.

So on this day where many women are striking to help bring attention to the value of women to our economy and our culture, I have a challenge for the men: be part of the conversation about women’s rights and social justice.  The fight for women’s rights and equality can only be won if it is not just women having the conversation.  We need men to play an active role in change.  And that means bringing men into the fold to talk about and consider why it is so important to have women in the workplace – treated and paid equally.   

I think it is apropos, as men at work look around at empty desks (or, worse yet, at desks that are not empty because the company is mostly male), to take a brief moment to consider why you value women in the workplace and see them as equals, then share that story with your wife or partner or female friends. Or grab a couple of your colleagues, gather around the water cooler and tell a few good stories about why you think having women as equal participants in our economy and workplaces is better for you, better for your company and better for the world.

To that end, at Tasktop, our CEO, Sr. Director of Business Development, Sr. Director of Engineering and Sr. Director of Technical Solutions have all contributed their thoughts as to why Tasktop is a better place to work and produces better work output because we value diversity and constantly strive to attract and retain women in the workplace.

Mik Kersten, Tasktop CEO: “From co-founder to every team and level of management, Tasktop has thrived by actively seeking and fostering an environment for women in tech.  The diversity in thinking and problem solving that results produces better innovations and better business results.  As high tech businesses continue to become more complex and more creative, the companies that are enlightened to this will outperform those who are stuck in the stone age of the boys’ clubs.”

Wesley Coelho, Tasktop Sr. Director of Business Development: “Tasktop is absolutely a more successful business because women are involved. 75% of Tasktop’s Business Development team members are women. Because of their contributions we have been able to drive substantially higher revenue and become the most widely adopted integration technology for SDLC application vendors.”

Lucas Panjer, Tasktop Sr. Director of Engineering: “Tasktop is a more successful business because women are involved. (It’s really that simple.) It’s the diversity, the perspective, and experiences that are brought to bear in design, decision making, strategy and execution. These things are concrete, important, and contribute to a better overall culture, product, and business. However, I would argue that these are the less impactful reasons, and that the impact of women is far bigger and simpler to explain. We’re not making the most of our world, society, and personal and professional opportunities if women, and any under-represented groups, aren’t fully present, participating, and at the fore. If women aren’t fully here, we’ve lost out on potential, and created a huge opportunity cost for ourselves, this company, and society as a whole.”

Shawn Minto, Tasktop Sr. Director of Technology Services: “Tasktop successfully supports our customers because of the dedication of the women involved. Whether it be working tirelessly with a customer to troubleshoot and solve a problem whilst building a strong relationship, understanding a new tool and its use in its entirety, or working through complex legals to complete a sale, the women of Tasktop use their intelligence and skills to excel at their tasks. Through their perseverance, passion and commitment to everything that they do, they are integral to our success.”

These thoughts, coming from our male colleagues, make me extremely proud to be part of the Tasktop team and hopefully they will inspire other men to come forward and be a champion for women. It’s really being a champion for all.

Defect Management – Process Instead of Tracking

Tue, 03/07/2017 - 10:19

As software development continues to evolve, we need to reconsider how we manage defects. In the past, defect management focused merely on documentation and fixing the issues discovered. Today that is simply not enough, with modern Agile and highly integrated toolchains rendering the process ineffective.

Now we need to establish a process that tracks defects over the entire tool stack and use all possible information to improve the software development lifecycle. To achieve this, the process should have the following main goals:

  • Prevent defects (the main goal)
  • The process should be risk-driven
  • Measurement should be integrated into the development process and used by the whole team to improve the process
  • Capture and analysis of the information should be automated as much as possible
  • Many defects are caused by an imperfect process – the process should be customizable based on conclusions drawn from the collected information

To reach these goals, one can take the following steps:

Defect Prevention – Standard processes and methodology help to reduce the risks of defects

Deliverable Baseline – Milestones should be defined where deliverable parts will be completed and ready for future work. Errors in a deliverable are not a defect until the deliverable is baselined

Defect Discovery – Every identified defect must be reported. A defect is only discovered when the development team has accepted the reported issue and it has been documented

Defect Resolution – The development team prioritizes, schedules and fixes the defect based on the risk and business impact of the issue. This step also includes the documentation and verification of the resolution

Process Improvement – Based on the collected information, the processes in which the defect originated should be identified and analyzed in order to improve them and prevent similar defects in the future.

Management Reporting – For all steps, it should be possible to use the collected information for reporting that assists with project management, process improvement and risk management. (A sketch of the implied defect lifecycle follows.)
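The discovery and resolution steps above imply a defect lifecycle that a tracking system can enforce. Here is a hedged Python sketch; the states and transitions are inferred from the steps described and are illustrative only.

```python
# Illustrative lifecycle inferred from Discovery -> Resolution -> Improvement.
ALLOWED = {
    "reported":   {"discovered"},   # accepted and documented by the dev team
    "discovered": {"scheduled"},    # prioritized by risk and business impact
    "scheduled":  {"resolved"},     # fixed, documented and verified
    "resolved":   {"analyzed"},     # fed back into process improvement
}

def transition(defect, new_state):
    """Move a defect to new_state, enforcing the lifecycle above."""
    if new_state not in ALLOWED.get(defect["state"], set()):
        raise ValueError(f"illegal transition: {defect['state']} -> {new_state}")
    defect["state"] = new_state
    return defect

bug = {"id": 42, "state": "reported"}
transition(bug, "discovered")
print(bug)  # {'id': 42, 'state': 'discovered'}
```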

When creating a defect management process that is right for your organization, it is also important to consider the stakeholders and the type of assets and artifacts involved.

Stakeholders

The defect management process involves many different stakeholders and they must be taken into consideration when developing an effective defect management system. Let’s consider the flow of information.

The author creates or reports the defect to the development team. Based on where the defect was identified, the authors could be developers, testers or members of the support team.

These people could also be consumers of the defect. Developers must verify, fix and document the resolution for the identified defect. Testers use the information to create new test definitions based on the defect found, and to verify whether the resolution solves the problem. The support team can use the information to deliver possible workarounds and to clarify reported issues that have already been logged as defects.

In smaller teams the developer could also be a contributor. In larger teams, the development manager holds this role, prioritizing, scheduling and assigning the defects that have been created. Other consumers are executives and management, who use the information in reports to gain insight and improve the development processes.

Artifacts and Assets

The main assets for defect management are error reports with a description of the problem, which should include detailed information for reproducing the issue; screenshots or screen-capture videos can help here. Log files, especially with detailed tracing and stack traces, are an important source for the development team to identify the defect. In most Agile or application lifecycle management systems, a defect can be tracked and documented as an artifact (such as an issue, defect, problem or bug).

How Tasktop can improve your defect management

In today’s integrated software development lifecycle, stakeholders use different types of tools to fit their needs, and defects need to be created and tracked in each of these systems. To prevent a lag in communication and loss of information, it’s possible to integrate the different tools with Tasktop and have an automated information flow across the whole tool stack.

Some common integration patterns are:

Developer-Tester Alignment – Defects can be synchronized into testing tools for creating tests based on the defect.  Additionally, testers can create defects in their favorite tool and have them synchronized back into the developer tool for quick and easy resolution

Help Desk Integration – Support can create a defect, synchronized from a reported problem, allowing support to track the status of the defect. Furthermore, it is possible to use the information from existing defects to create a knowledge base of known issues and workarounds

Security Issue Tracking – Security violations discovered by an application security tool are synchronized as defects for resolution

Supply Chain Integration – Defects can be synchronized in the quality assurance process to a contractor or third-party supplier for quick resolution

Consolidated Reporting – All defect information can be aggregated and consolidated to create reports for optimization of the defect management process
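As a rough illustration of the Developer-Tester Alignment pattern – this is not Tasktop’s actual configuration model, only the shape of the field mapping that any such synchronization needs – a defect created in a QA tool can be mirrored into a development tool like so:

```python
# Hypothetical schemas: a QA tool's defect mirrored into a developer tool.
FIELD_MAP = {"title": "summary", "severity": "priority"}  # QA field -> dev field

def mirror_defect(qa_defect, field_map):
    """Translate one tool's artifact schema into another's."""
    return {dst: qa_defect[src] for src, dst in field_map.items()}

qa_defect = {"title": "Login fails on Safari", "severity": "high"}
dev_issue = mirror_defect(qa_defect, FIELD_MAP)
print(dev_issue)  # {'summary': 'Login fails on Safari', 'priority': 'high'}
```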

For further information on defect management and how Tasktop can help by integrating your software value stream, please visit our website and contact us today.

Test Management – An Integration Opportunity

Thu, 02/23/2017 - 13:38

Test management is the practice of managing, running and recording the results of a potentially complicated suite of automated and manual tests. Test management also provides visibility, traceability and control of the testing process to deliver high quality software, quickly.

Tools for test management are generally used to plan and manage tests, test runs and gather execution data from automated tests.  Additionally, they can typically manage multiple environments and provide a simple way to enter information about defects discovered during the process.

When we explore how organizations manage their software testing, it becomes clear how an integrated software toolchain greatly improves test management. This benefit becomes particularly clear when we consider how a connected workflow supports the stakeholders and the flow of assets and artifacts in the process and the common integration examples you encounter.

Stakeholders
There are several stakeholders involved in Test Management process:

  • Testers: consume requirements to create and execute test cases.
  • QA Managers: contribute to prioritization and high level planning of tests.
  • Developers: contribute to building the software and fixing defects found by testers.
  • Product Managers: define the requirements to be tested and determine release readiness.

Assets and Artifacts
Common assets used by Test Management tools are test plans, automated test scripts (code) and automated test frameworks (set up, tear down and result files). The most common artifacts used and produced by Test Management tools are test executions, test cases, test configurations, test sets, test instances, requirements and defects.

Integration Scenarios
Some common integration patterns used in the Test Management process: Developer-Tester Alignment, Requirements Management-Test Planning Alignment, and Test Alignment.

  • Developer-Tester Alignment: defects generated by developers are synchronized into a Test Management tool so tests can be written against them to prevent regressions, and defects generated by testers are synchronized into development tools so that defects can be resolved.
  • Requirements Management-Test Planning Alignment: requirements generated by a Business Analyst in an Agile tool are synchronized into a Test Management tool so that tests can be written against them in parallel to any development efforts.
  • Test Alignment: tests are generated by Agile team members to validate user stories during the Sprint. Tests are synchronized to the Test Management tool so the centralized testing organization can add additional details and automate the tests as needed.

Integration Example
One popular test automation suite is Selenium, which allows an organization to develop an extensive test suite for web-based products. One particularly interesting integration opportunity is to capture the failures of test cases run by Selenium in the test management tool (e.g. HPE ALM) and kick off development work for the issues, so that the development and QA teams are informed of issues that need attention.

  • A QA team uses Selenium for automated testing of their web application and HPE ALM for test management, while the development team uses HPE ALM to resolve any defects found by Selenium.
  • When Selenium detects a failure in its testing, failures should be recorded and submitted as defects. Test case results should also be linked to their original test cases in the test management tool (HPE ALM in this case).

This integration can be extended across the lifecycle using Tasktop Integration Hub, which automatically creates a new defect in HPE ALM for prioritization and resolution, bringing the benefits of automated testing into the enterprise tool of choice for test management.
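Here is a hedged sketch of that scenario in Python. The Selenium calls are real; submit_defect() is a stand-in for whatever defect-creation API your test management tool or integration hub exposes.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def submit_defect(summary, details):
    # Stand-in for the test-management tool's defect API (or for an
    # integration hub picking the failure up automatically).
    print(f"DEFECT: {summary}\n{details}")

driver = webdriver.Chrome()  # assumes a local chromedriver is available
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username")  # raises if the element is absent
except NoSuchElementException as exc:
    submit_defect("Login form missing on /login", str(exc))
finally:
    driver.quit()
```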

To learn more about how Tasktop integrates your software value stream and improves your test management capability, visit our website and contact us today.

Strengthening Application Security in the Software Development Lifecycle

Tue, 02/21/2017 - 13:32

As software continues to pervade our lives, the security of that software continues to grow in importance. We need to keep private data private. We need to protect financial transactions and records. We need to protect online services from infiltration and attack.

We can obtain this protection through ‘Application Security’, which is all about building and delivering software that is safe and secure. And developing software within an integrated toolchain can greatly enhance security.

What’s application security?

Application Security encompasses activities such as:

  • Analyzing and testing software for security vulnerabilities
  • Managing and fixing vulnerabilities
  • Ensuring compliance with security standards
  • Reporting security statistics and metrics

There are several different categories of these tools; however, the following are the most interesting in terms of software integration:

  • Static Application Security Testing (SAST) – used to analyze an application for security vulnerabilities without running it. This is accomplished by analyzing the application’s source code, byte code, and/or binaries for common patterns and indications of vulnerabilities.
  • Dynamic Application Security Testing (DAST) – used to analyze a running application for security vulnerabilities. These tools work by automatically testing the running application against common exploits. This is similar to penetration testing (pen testing), but fully automated.
  • Security Requirements tools – used for defining, prioritizing, and managing security requirements. These tools take the approach of introducing security directly into the software development lifecycle as specific requirements. Some of these tools can automatically generate security requirements based on rules and common security issues in a specified domain.
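As a toy illustration of the SAST idea above (real tools are far more sophisticated and language-aware), a scanner can flag suspicious patterns in source text without ever executing it:

```python
import re

# Two toy rules; real SAST rule sets are far richer and context-sensitive.
RULES = {
    "possible SQL injection": re.compile(r"execute\(.*%s"),
    "hardcoded credential":   re.compile(r"(password|secret)\s*=\s*['\"]"),
}

def scan(source):
    """Flag lines of source text that match a known vulnerability pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule, line.strip()))
    return findings

sample = 'db.execute("SELECT * FROM users WHERE id = %s" % uid)\npassword = "hunter2"'
for finding in scan(sample):
    print(finding)
```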

Other categories of Application Security tools, such as Web Application Firewalls (WAFs) and Runtime Application Self-Protection (RASP) tools, are more focused on managing and defending against known security vulnerabilities in deployed software, and are somewhat less interesting for integration.

There are many vendors of Application Security tools. Some of the most popular are: Whitehat, who makes SAST and DAST tools; IBM, whose AppScan suite includes several SAST and DAST tools; SD Elements, who makes Security Requirements tools; HPE, whose Fortify suite includes SAST, DAST, and RASP tools; Veracode, who produces SAST and DAST tools; and Checkmarx, offering a source code analysis SAST tool. 

How is software integration relevant to application security?

When looking to integrate new tools into your software delivery process, it is important to first identify the stakeholders of those tools, and the assets consumed by and artifacts produced by those tools.

The most common stakeholders of Application Security tools are:

  • Security Professionals: write security requirements, prioritize vulnerabilities, configure rules for SAST and DAST tools, and consume security statistics, metrics, and compliance reports
  • Developers: implement security requirements in the software they are building, and fix vulnerabilities reported by SAST and DAST tools
  • Testers: create and execute manual security test plans based on security requirements
  • Managers: consume high level security reports, with a focus on the business and financial benefits of security efforts.

Common assets consumed by Application Security tools include:

  • Source code
  • Byte code
  • Binaries
  • Security rules

Common artifacts produced by Application Security tools include:

  • Vulnerabilities
  • Suggested fixes
  • Security requirements
  • Security statistics and metrics

With so many people and assets involved in the workflow, we need all stakeholders to be able to trace artifacts, spot vulnerabilities and rely on automated reporting so issues can be addressed as they arise. An integrated workflow makes this possible.

Common integration scenarios

The three Software Lifecycle Integration (SLI) patterns we’ll be looking at are Requirements Traceability, Security Vulnerabilities to Development, and the Consolidated Reporting Unification Pattern.

  • Requirements Traceability: the goal is to be able to trace each code change all the way back up to the original requirement. When it comes to Application Security, we want security requirements to be included in this traceability graph. To accomplish this we need to link requirements generated and managed by Security Requirements tools into the Project and Portfolio Management (PPM), Requirements Management, and/or Agile tools where we manage other requirements and user stories. We can currently do this with a Gateway integration in Tasktop Integration Hub, by adding a Gateway collection that accepts requirements from our Security Requirements tool and creates matching requirements or user stories in our PPM, Requirements Management, or Agile tool.
  • Security Vulnerabilities to Development: this is about automatically reporting security vulnerabilities to our development teams to quickly fix them. To accomplish this we need to link vulnerabilities reported by SAST and DAST tools into our Defects Management or Agile tools, where developers will see them and work on a fix. We can currently do this with a Gateway integration in Tasktop Integration Hub, by adding a Gateway collection that accepts vulnerabilities from SAST and DAST tools and creates matching defects in our Defects Management or Agile tool.
  • Consolidated Reporting Unification Pattern aims to consolidate development data from the various tools used by teams across an organization so that unified reports can be generated. When it comes to Application Security, we want data about security requirements and vulnerabilities included so that it can be reported on too. We need to collect these artifacts produced by our Application Security tools into our data warehouse. We can currently accomplish this with a Gateway Data integration in Tasktop Integration Hub, by creating a Gateway collection that accepts security requirements and vulnerabilities from our various Application Security tools and flows them into a common Data collection.
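As a rough sketch of the second pattern’s hand-off – this stand-in does not use Tasktop’s actual Gateway API, it only shows the shape of the flow – a SAST or DAST tool posts a vulnerability as JSON, and a matching defect is created in the development tool:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

def create_defect(vuln):
    # Stand-in for the defect tracker's API; details vary per tool.
    print(f"Creating defect: [{vuln['severity']}] {vuln['title']}")

class GatewayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The scanning tool POSTs a vulnerability as JSON; we mirror it
        # into the development tool as a defect.
        length = int(self.headers["Content-Length"])
        vuln = json.loads(self.rfile.read(length))
        create_defect(vuln)
        self.send_response(201)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), GatewayHandler).serve_forever()
```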

For further information on how Tasktop integrates your software value stream and enhances Application Security, visit our website and contact us today.

Key Lessons From A Big Software Product Launch

Thu, 02/16/2017 - 14:14

Last month was a seminal moment for us – we launched our next-generation software integration product, Tasktop. As ever, the product development journey was one hell of a ride.

Three years. 500,000 lines of code. 20,000 automated tests. 5,000 wiki pages. Hundreds of design sessions. Many mistakes. Some tears. A few moments of deep soul searching. And many days filled with tremendous pride watching a team pull together to deliver something special – something that we truly believe will transform the way people think about integration.

In true Agile style, I’m a big believer in retrospection, ascertaining key lessons and gleaning takeaways from the experience to improve the way we work. So what did we learn this time round?

It’s ALL about the people and trust.

To combine the powers of talented individuals and turn them into a true team, you need trust. All of our team will admit there were some rocky moments at the beginning, and that’s only natural. Yet with hard work and perseverance, you can forge a close, powerful unit that runs like a well-oiled machine.

Trust that the product manager and designers have fastidiously analyzed what the customers want and are requesting an accurate representation of their needs. And trust that architects and developers are designing a codebase and architecture that can be built on (while remaining as nimble and lightweight as possible).

If I had a ‘magic button’ (everyone at Tasktop knows my obsession with magic buttons!), it would be the ‘trust’ button. Of course that is not possible – trust is built up over time and can’t be rushed – but once you’ve got it, man, is it an addiction!

It takes a village.

Building a pioneering software product isn’t all about the developers (although they’re obviously integral). To get the best result possible, you need:

  • Strong user-focused product managers
  • Imaginative and creative user experience designers
  • QA professionals that see the BIG picture (as well as thousands of details)
  • Technical writers willing to rethink documentation from the ground up

Throw sales and marketing into the mix and the village becomes more of a city by the end. Embrace it, take everyone in and watch your product development flourish in this environment.

Don’t give up and don’t give in.

Set a vision and DEMAND a relentless pursuit of that vision. When it seems like everything is being called into question, reach deep inside and stick to your core vision. It’s your constant, your north star.

Now, this doesn’t mean that you can’t alter and tweak things along the way – in fact, I would say if you don’t do a good amount of that you are heading for potential disaster. But if you don’t believe in the core vision that was set, then you will lose your way.

Have urgency but at the same time patience.

There is a somewhat elusive proper balance of patience and urgency. If I had another magic button I would use it for this purpose…but since I don’t, I think your best bet is to trust your gut to know when to push, and when to step back and let things steep.

Laugh a little. Or a lot.

I treasure the moments during the course of building Tasktop where we were laughing so hard that we cried. The thing I love is that I can’t even remember many of the funny moments that we shared – there were too many. And, yes, there were also a not insignificant number of moments where there was frustration and downright anger. But those memories aren’t what stick – what sticks are the moments where we overcame the hurdle, pulled together and laughed at ourselves.

Be grateful for those who support you.

Last but definitely not least, appreciate and thank the people that made the vision come to life. That doesn’t just include the direct teams that were involved, but also those who support you outside of work such as your friends and families.

The family that puts up with 28 trips to Vancouver in five years. The family that lives and breathes the ups and downs with you. The family that wants to see this product succeed almost more than you do!

To that end, I would like to thank my family: my husband, my son and my daughter – I thank all of you for putting up with the craziness of the last three years! If only the walls could talk… but instead, my 10-year-old daughter decided to write down her own thoughts a few weeks before the launch:

“3,2,1…BLASTOFF!!!!!! This launch is all my mom has talked about (and the election) for the past 3 months. How much she has been talking about it shows that this launch must be really important. You should get the front page on the newspaper – which if you haven’t read since the internet came out I don’t blame you.

To be frank, I actually don’t know what the big product is supposed to be, but from past experience, Mommy’s team gets all upset when a product doesn’t work. Also, another benefit of getting this thing to work is that everybody will be super happy and joyful.

But I will say, whoever scheduled the timing of her big trip to Vancouver for the launch must not have realized that the big trip almost makes me not see my mom for two weeks because I am going to Hawaii (yes, my parents are that awesome they are letting me go to Hawaii for a week as a 10th birthday present).

But, of course, don’t let that stop you from making this Tasktop’s best product yet. Make the product, make it work, and make it the most awesome thing the world has ever seen.

“Tasktop, the most empowering force ever!” I can see it in those big letters on the front page. Yes, I am waiting for the day I see those exact words marching bravely across the front page of the newspaper. So, don’t just stand there, get up and show the amazing, futuristic, and wonderful world of Tasktop.”

– Bailey Hall, one of Tasktop’s youngest and brightest thought leaders.

I’d like to thank everyone involved in making the launch of Tasktop a success as we move on to the next significant stage in the product’s development – getting it to market and harnessing its many capabilities to drive our customers’ large-scale Agile and DevOps transformations.

For more info on the new product, check out our new site www.tasktop.com

Value Stream Integration

Tue, 02/14/2017 - 07:27

Every business is now a digital business – if you’re not, you’re vulnerable to disruption. Traditional business models, infrastructures and operations are in flux as software continues to usurp and transform the status quo. Look at Airbnb and hotels, Uber and taxis, Netflix and films, Amazon and retail…you get the picture. The message is clear: keep up or be left behind. Or to paraphrase organizational theorist Geoffrey Moore: “Innovate or die!”.

Most CIOs are acutely aware of this state of play and are under pressure to optimize their organization’s software delivery process. Many are investing in new staff, tools and processes to drive Agile and DevOps initiatives, and are often encouraged (and given false hope) by initial success – especially at a local level. Then they try to scale and become stuck. There are too many tools, people and disciplines. The toolchain is fragmented and their transformations are failing.

Why does this happen? The problem is that best-of-breed Agile and DevOps tools don’t work together, creating friction in the way stakeholders interact. This causes manual work that increases cost, reduces velocity and frustrates team members, all while making it difficult for management to get the visibility and traceability they so desperately need to make key business decisions.

Organizations continually adopt new tools to improve the individual disciplines they serve – test automation, requirements management, agile planning, DevOps and the like. By using these tools, stakeholders create work specifically for collaborating with their colleagues. But that collaboration is compromised precisely because each of these disciplines is using different, unintegrated tools.

Furthermore, managers want metrics and dashboards that give real-time status reports so they can optimize a process and/or ensure compliance. However, with a fragmented toolchain it is nearly impossible to obtain a holistic view. And everyone knows that the only way to improve a process is to look at it holistically.

What can be done? The key is to integrate the value stream.

Value Stream: Sequence of activities required to design, produce, and provide a specific good or service, and along which information, materials, and worth flows – Business Dictionary[1] 

When we talk about an integrated value stream in software delivery, we mean bringing together the tools across the software development lifecycle to radically improve team effectiveness and project visibility, which allows you to:

  • Eliminate the wasted time, errors and rework by automating the flow of work across people, teams, companies and tools. This increases the team’s velocity and capacity, enhances collaboration and improves employee satisfaction
  • Enable real-time visibility into the state of application delivery by coalescing activity data across the value stream. This data can be used for management, optimization and governance reporting and dashboarding, as well as data for automated traceability reporting
  • Create a modular, responsive tool infrastructure that can itself react and adapt to new tool innovations and changing working practices, enabling tools to be plugged in and out on-the-fly

Until now, creating this sort of integrated software delivery value stream has been too hard. Companies adopted point-to-point and homegrown integrations that were costly, brittle and unable to scale. It was simply too difficult to automate the flow of information across a large-scale tool ecosystem, making value stream integration and visibility financially unviable. But now the game has changed.
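To make the fragility concrete, here is a minimal sketch of what a homegrown point-to-point sync often looks like – the tools, field names and values are invented for illustration:

    # One-way sync from a defect tracker to an agile planner. Every pair of
    # tools needs another script like this, and each one hardcodes both ends,
    # so any change to either tool's fields or API quietly breaks it.
    def tracker_to_planner(defect: dict) -> dict:
        priority_map = {"Sev-1": "critical", "Sev-2": "major", "Sev-3": "minor"}
        return {
            "title": defect["summary"],
            "priority": priority_map.get(defect["severity"], "minor"),
        }

Multiply that by every pair of tools in the ecosystem and the cost and brittleness compound quickly.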

Our model-based approach dramatically reduces the complexity of creating and managing integrations, and scales to dozens of tools, thousands of users and millions of artifacts. For the first time, integration and visibility across the entire software value stream are economically possible, helping some of the world’s most successful organizations – including nearly 50% of the Fortune 100 – to thrive in a digital world.

  • Are you finding it difficult to give your managers visibility into how things are going?
  • Are your colleagues complaining that they waste a lot of time on administration?
  • Is there a disconnect between your Agile methods and the need for governance and compliance?

If so, check out our videos about our one-of-a-kind model-based integration approach and speak to us today about integrating your value stream to drive your Agile and DevOps transformations.

[1] Business Dictionary

Reimagining Software Integration

Thu, 02/09/2017 - 12:40

For too long, software lifecycle integration has been viewed as the red-headed stepchild at organizations – an unglamorous chore that is often considered a developer issue and a developer issue alone. That perception must change – it’s actually a critical organizational issue and this misconception is why Tasktop is leading the charge in rebranding integration.

The word ‘integration’ shouldn’t make your eyes glaze over, nor should it be last on the agenda when talking to management about how to succeed at scale. Integration should elicit intrigue and demand immediate attention – it’s THAT important. Why? Because integration is, in fact, precisely what will allow you to achieve organizational success at scale.

Actually, let me say that stronger; without integration, you won’t be able to scale. Wait…stronger…succeeding at scale is 100% dependent on integration. Integration is precisely how you will achieve your business goals, be it an agile transformation, DevOps initiative or improving your software delivery capabilities.

We’re reimagining integration to fundamentally change the way people think about how they connect software development tools and transform the way they deliver software. Let me show you how…

Imagine a world where… you can configure a sophisticated integration between two complex systems in under an hour. How? With a completely reimagined user experience that presents itself not with bits and bytes and XML configuration but instead in a visual, intuitive, logical way that aligns with what you already know and how you think about the tools you use. No coding required. This video further explains this benefit.

Imagine a world where… after you configure your first integration, you can scale to hundreds of projects instantly, thanks to the magic of models, which are the secret sauce behind being able to map once and then scale infinitely – as explained in this video.

Imagine a world where… the tool you are using to integrate has already codified so much about the end tools that integrations almost create themselves. Scary? Ok, maybe a little. But our smart mappings and auto-suggested flows show the power of connectors that are domain aware.

Imagine a world where… all integrations work. All the time. With Tasktop, reliability is built in from the ground up. Nothing runs on Tasktop that hasn’t been through our ‘Integration Factory’ – a unique testing infrastructure that runs over 500k tests a day across 300+ connectors.

But those are all just features … and above I said that integration was a 100% dependency to be able to scale… so let’s talk about that a bit.

Scaling means two things: more people and more processes. And more people and more processes mean more tools. But if those tools don’t operate as one, scaling quickly turns into a creaky machine of manual handoffs, endless meetings and unhappy practitioners. So integration means getting all these tools, teams and disciplines to act as one.

But can you really get these various tools that aren’t designed to work together to ‘act as one’? The short answer is “Yes!”. The longer answer is “Yes – but only with Tasktop.” Only with Tasktop can you ensure your tool landscape consistently functions as a single, powerful entity, no matter how many systems you add to the tool stack.

We know the key ingredients that enable you to scale your toolchain so that you can drive organizational objectives and consistently deliver customer value. In fact, our ‘reimagined world of integration’ places strong emphasis on the software value stream, and it seems we’re on to something big. Since June 2016, we’ve received extremely positive feedback on our new proposition from participants in our Early Access Program.

Value stream integration is the next significant chapter in software delivery and that’s why we have launched Tasktop Integration Hub, our new product and pioneering approach to large-scale software integration.

If you missed our live-streamed launch event last week, you can watch the recording here. In the video, our CEO and founder Dr. Mik Kersten and I introduce the product, while customers explain its importance and how Tasktop is supporting their Agile and DevOps transformations at scale.

Let’s transform the technology landscape together and be part of history.

Eliminate the PMO Scavenger Hunt

Wed, 02/08/2017 - 11:28

The sheer multitude of projects that an organization undertakes every day puts enormous pressure on the Project Management Office (PMO). And considering that 97 percent of organizations believe project management is critical to business performance and organizational success[1], it’s paramount to ensure the PMO has the best intel to do its job efficiently.

Project managers rely heavily on the PMO to keep them abreast of the latest information regarding their projects, as well as other projects that may have an impact on their work. They also look to the PMO to provide key insights on a product’s journey from concept to delivery, identifying bottlenecks ahead of time to ensure smooth sailing. However, providing such a holistic overview is a huge challenge, which may explain why 50 percent of all PMOs close within just three years[2].

One of the key factors behind a PMO’s downfall is access to the vital data that enables them to build the all-important real-time picture of the project portfolio. With regards to software development and delivery, they need end-to-end visibility and traceability throughout the lifecycle so they can make key decisions on influential matters such as resource capacity, labor headcount, project budgeting, IT strategy and so forth.

Traditionally, they have acquired this information through a cumbersome, time-consuming scavenger hunt between teams and tools that often work in silos. Without an intuitive system to gather this valuable information in one place, they’re forced to spend valuable time chasing down status reports, logging into specific tools, merging spreadsheets and performing other onerous manual work – precious time that could be better spent elsewhere.

But it doesn’t have to be that way – not with an integrated software lifecycle providing the visibility, traceability and valuable data that they desperately need to do their job to the best of their abilities.

For further information, please download our guidelines to eliminating the PMO scavenger hunt.

You can also speak to our dedicated team who can best advise how to optimize your PMO.

[1] PwC, Global Project Management Report, 2012
[2] http://www.keyedin.com/keyedinprojects/article/why-pmos-fail-5-shocking-pmo-statistics/

TasktopLIVE: The Software Delivery Game is Changing

Mon, 02/06/2017 - 09:44

Last Tuesday, we unveiled our next-generation product Tasktop Integration Hub at our headquarters in Vancouver. During a live-stream event – TasktopLIVE – we set out our new approach to software delivery and explained how we’re redefining the Agile and DevOps landscape.

CEO and founder, Dr Mik Kersten, kicked off proceedings by providing acute analysis of the software delivery landscape and the current potency of Agile and DevOps transformations: “Agile and DevOps have come of age – we’re seeing a lot of success at startup level, but huge struggles when organizations try to scale these transformations.”

Summarizing Tasktop’s evolution over the last ten years, Kersten talked through how the company has continually built solutions that optimize the whole software lifecycle through sophisticated integration and market-leading expertise. The latest offering, he emphasized, is a natural response to how software is evolving and how people work and use applications.

“All teams work in their best-of-breed tools to improve functionality in their specific roles, but these tools aren’t connected or communicating. The result is a fragmented value stream that lacks the visibility, traceability, governance and security required to continuously deliver business value.”

To address this, we have devised an entirely new approach to Agile and DevOps. Tasktop allows enterprises to define value stream artifacts and flows in a unified Integration Hub, ensuring that teams get the productivity benefits of each tool, while the business realizes immediate ROI from eliminating the waste of manual entry, automating end-to-end traceability and easily achieving end-to-end visibility.

Then Nicole Bryan, Tasktop’s Vice President of Product Management, explained how easily and simply Tasktop can integrate as many tools as required for seamless scalability (a process that is “also quite fun!”). Also speaking were a selection of customers, all of whom elaborated on how Tasktop has helped their DevOps and Agile transformations and why the new approach is so important.

Carmen DeArdo, Technical Director at Nationwide, explained how Tasktop boosts his job performance: “I have to figure out how to make things work better across our delivery value stream. Tasktop helps me to do that and enables us to build exciting applications.”

DeArdo also reiterated how important visibility into the value stream is: “You can be a great driver and have a great car, but if it’s foggy and you can’t see the road, you’re going to slow down because you don’t trust what’s going on around you.”

Meanwhile Mark Wanish, former SVP, Consumer Sales & Marketing Technology Executive at Bank of America, has been involved in Agile transformations for over a decade and is a great advocate for Tasktop’s approach of focusing on the whole value stream: “You can make developers more Agile and improve their capabilities, but you can’t neglect elsewhere in the organization – for Agile to be a success, everyone needs to be involved and delivering value.”

Also on the panel was Jacquelyn Smith, Senior Director, Platform Technologies at Comcast, who has recently begun working with Tasktop following a big merger between Comcast and another engineering company: “Following the merger, we had an abundance of toolsets and instances thereof – we went from six tools to fifty! We wanted to scale products to serve our customers, but also be more sensible about how we move data between tools. We’ve just started working with Tasktop and they’re already helping us to support large-scale integrations and enabling us to work more simply, easier and faster.”

You can watch the whole recording of the TasktopLIVE event here. For further information on Tasktop Integration Hub, please check out our brand-new website, which is jam-packed with engaging new content.

Interested in adopting our pioneering approach to software delivery? Contact us today and request a demo. 

Tasktop Integration Hub: Features and Models

Wed, 02/01/2017 - 09:30

At Tasktop, we’re very excited about our recent Tasktop Integration Hub launch. With this new product, we didn’t just set out to make incremental improvements. We set out to reimagine integration. Tasktop Integration Hub is one solution that handles pretty much all software delivery integration needs. It provides the right information to the right person in the right tool at the right time.

I set out to write about the features of this new product, but while writing, I had a few realizations…

First, reading a laundry list of features is boring. If you want to see the features along with some short videos, please visit our feature page, where you will find brief descriptions and one-minute videos. These videos will do a much better job of ‘showing’ you the features than me describing them.

Second, features are probably not what you care about. But you probably care that it works. That it’s powerful enough to support your organization. Tasktop just celebrated our 10th birthday. We’ve spent a decade listening to customers, and we’ve distilled thousands of hours of real-world customer feedback and use cases into a singular tool. And it does work.

During the past ten years, we’ve noticed that integration is often the last thing a company addresses in the software development process. Enterprises come to Tasktop after selecting tools and workflows. Sometimes, customers come to us after they have tried to handle integration on their own. This means we’ve had to be flexible in order to fit into almost any process. It also means we’ve seen a lot – and it has made us work harder to provide the best integration tool on the market.

We understand that integration is about efficiency and ease of use. A good tool lets you do what you need to do. A great tool gets out of your way and lets you do what you want. A world-class tool helps you do things you never knew you wanted to do in the first place.

What our customers highlighted as critical – and how we listened:

  • Connecting the tools they use.
    • We connect to over 45 tools using fully tested connectors. And we’ve added a new integration style that allows our customers to push events from a wide variety of tools.
  • Scaling existing integrations.
    • We understand how important it is to quickly add new projects to existing integrations. Tasktop has added Model-based integration management so it is simple to add a 2nd, 3rd, or even 100th project to the integration, and our solution ‘understands’ what our customer is trying to accomplish.
  • Flexibility.
    • Our customers must implement business rules about what goes where and when. Tasktop can filter and route artifacts as well as comply with customer needs around frequency (and direction) of specific field updates.
  • Security.
    • We provide secure log-in via our web-based interface.
  • Minimize server traffic.
    • We consistently hear from potential customers about their concerns around server overload caused by near real-time updates. Tasktop Integration Hub has implemented Smart Change Detection to limit the load on tools: it senses changes to artifacts while maintaining the smallest possible footprint, as sketched below.
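For the curious, here is a rough sketch of the idea behind Smart Change Detection – a conceptual illustration only, not Tasktop’s actual implementation (the tool client and its list_revisions method are hypothetical):

    # Poll a cheap per-artifact revision summary and fetch full payloads only
    # for artifacts whose revision has moved since the last pass.
    last_seen = {}  # artifact id -> last synced revision

    def changed_artifacts(tool):
        changed = []
        for artifact_id, revision in tool.list_revisions():  # lightweight call
            if last_seen.get(artifact_id) != revision:
                changed.append(artifact_id)  # only these get a full fetch
                last_seen[artifact_id] = revision
        return changed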

I did write that I wasn’t going to focus on features, but there is one important new aspect of Tasktop Integration Hub that I would like to cover. Before I do, I wanted to mention that Tasktop Integration Hub includes the most used, important and popular features found in Tasktop Sync:

  • Artifact relationships: maintains artifact relationships across all your tools.
  • Person mapping across tools: you know who made a comment, even if they made the comment in another tool.
  • Comment synchronization: people can converse in their tool of choice instead of relying on emails that are never attached to the persistent artifact.
  • Attachment synchronization: to prevent duplicate login to separate tools and cut down on emails.
  • Routing: so that each artifact can be synchronized to the right place on the other side. To be honest, we’ve improved this enough to merit its own blog post.
  • And many more.

So now let me point out one of the things that makes Tasktop unique… and will make your integrations much more robust.

Introducing… Models

Models are Tasktop’s way of providing a universal translator for all tools. All tools speak different languages. Historically, integration tools have relied on a 1-to-1 mapping between tools. That’s fine if there are only two tools, but we’ve seen the pain that occurs when companies want to integrate three, five, six or more tools. The number of ‘translations’ between tool languages becomes untenable. With six tools, there are already 15 translations needed. Think about what this does to tool lock-in: changing out one of these ‘languages’ for another requires five new translations. Models fix this.
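The arithmetic behind that claim is straightforward: point-to-point translations grow with the square of the number of tools, while a shared model needs only one mapping per tool.

    # Pairwise translations needed to connect n tools directly, versus
    # mapping each tool once into a shared model.
    def point_to_point(n):
        return n * (n - 1) // 2   # every pair needs its own translation

    def model_based(n):
        return n                  # one mapping per tool, into the model

    for n in (3, 6, 10):
        print(n, point_to_point(n), model_based(n))
    # 3 tools: 3 vs 3; 6 tools: 15 vs 6; 10 tools: 45 vs 10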

Integrating Without Models

Integrating With Models

Models allow your organization to normalize the information flowing between tools.

You may be asking yourself “What is a Model?”

A Model is your abstract definition of a given artifact. It’s how an organization defines a specific ‘thing.’ For example, what defines a Defect in your organization? What are the common fields that are required to specify a Defect at your company? Not only that, but what are the values in those fields? For example, do you specify the Severity of your defects as Low, Medium or High? Or do you refer to them as Sev-1, Sev-2, Sev-3, Sev-4? Models let your organization decide how Defects should be ‘thought of’. The beauty of a Model is that the end tools don’t need to use the same field values. That’s part of the translation capability that Tasktop Integration Hub provides.
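As a sketch of what such a Model might look like in practice – the names and structures below are invented for illustration, not Tasktop’s actual configuration format:

    # The organization's neutral definition of a Defect, including the
    # canonical values allowed for Severity.
    DEFECT_MODEL = {
        "summary": "string",
        "severity": ["Low", "Medium", "High"],
    }

    # Each tool is mapped once onto the model's canonical values; the end
    # tools never need to agree with each other directly.
    SEVERITY_TO_MODEL = {
        "defect_tracker": {"Sev-1": "High", "Sev-2": "High",
                           "Sev-3": "Medium", "Sev-4": "Low"},
        "agile_planner": {"Critical": "High", "Major": "Medium",
                          "Minor": "Low", "Trivial": "Low"},
    }

    def normalize(tool, value):
        # Translate a tool-specific severity into the model's canonical value.
        return SEVERITY_TO_MODEL[tool][value]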

This may sound complicated, but it’s not. Tasktop comes preconfigured with eight models. Think of these as starter Models. Maybe you’ll need a new model. Maybe you’ll only need to tweak an existing Model. Tasktop Integration Hub provides that flexibility.

The beauty of Models is once one integration is created between two tools, the process of adding another project from each tool to the integration takes a matter of seconds. See the Scaling Integrations video.

If you’re still interested in learning more about what Tasktop Integration Hub looks like, how easy it is to use and how easy it is to scale, you can check out the Tasktop Integration Hub Demo. This 11-minute demo illustrates how simple it can be to set up and scale an entire integration scenario involving four separate tools.

As Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.” Tasktop isn’t magic, but we sure want it to feel that way to our customers.

Tasktop Integration Hub is a world-class integration tool that will help you integrate tools in a way that could only be imagined before today.

Tasktop Integration Hub Launched, Value Stream Integration for Enterprise Agile & DevOps

Tue, 01/31/2017 - 04:50

Agile has won, and DevOps is now standard at startups worldwide.  With all of the success stories we are hearing at nearly every conference we attend, why is it that the conversations within our conference rooms continue to bring up a lack of clear business results, or outright deployment and adoption failures?

The success of lean practices for software delivery is critical to digital transformation and innovation, and the failure to execute on them opens the door to disruption. Yet organizations rooted in “waterfall” practices are thinking about scaling Agile and DevOps the wrong way. In prior decades, the way to succeed with new methodologies involved betting on the right platform. But in the world of Agile and DevOps, there is no one platform. Instead, we are witnessing a large-scale disaggregation of the tool chain towards best-of-breed solutions. For large-scale Agile and DevOps transformations to succeed, we must shift our thinking from end-to-end platform to tool chain modularity.

Today I am thrilled to announce that after over three years of development, we are releasing a whole new approach to scaling Agile and DevOps. The Tasktop Integration Hub completely re-imagines the integration layer of software delivery, and connects the end-to-end flow of value from business initiative to delighted customer and revenue results. To do this we have created the new concept of Model-Based Integration, where we allow organizations to define their Value Stream Integration layer right within Tasktop, automating flow across people, processes and tools. You can then map every best-of-breed tool into that value stream, easily scaling from a single Agile team to tens of thousands of happy and productive IT staff. And you can continue connecting new tools as your tool chain evolves, giving you the power of modularity for the tool chain itself. Tasktop makes the tool chain Agile and adaptable to your business needs.

This release unifies our previous Tasktop Sync, Gateway and Data products into a single Value Stream Integration offering that easily scales to connect hundreds of projects, tens of thousands of users and millions of artifacts.  All with a beautiful and intuitive web UI that enables you to connect all of your tools without writing a single line of code thanks to Model-Based Integration.

Over the coming days we will be posting more detail about what we have done, how we have done it, and how it changes the landscape of enterprise Agile and DevOps.  For now, check out the following videos to get a quick overview of the product highlights and a whole new way to see the ROI of your transformation.

This release is the culmination of not only hundreds of people and years of development at Tasktop, but countless hours of effort from a dozen leading IT organizations who became part of our Early Access program in April, and who have helped take the concepts from whiteboards and mock-ups to using them in production today. I encourage you to watch some of their testimonials from our TasktopLIVE event and to join the conversation.

Product highlights include:

  • A world-first model-based paradigm for visually connecting dozens of tools across hundreds of projects without requiring any coding. For example, user stories, defects and support ticket models are defined in Tasktop, and then can be easily mapped across dozens of different projects and tools.
  • Support for applying different styles of integration across tools. For example, Agile and ITSM tools can be integrated for defect/ticket unification, then easily connected to a database for instant Mean Time to Resolution (MTTR) metrics – see the sketch after this list.
  • Easy scaling across hundreds of projects. By defining models that span projects and tools, new projects can be on-boarded easily and connected to the value stream.
  • All integrations work all of the time thanks to Tasktop’s unique Integration Factory. Multiple versions of the same tool can be connected, along with old versions of legacy tools and the frequently updated APIs of SaaS tools, without breaking because Tasktop tests all version combinations. Currently, Tasktop supports 51 tools and 364 versions.
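To illustrate that second point: once defect and ticket activity is unified in a database, a metric like MTTR becomes a short calculation. A sketch with invented record fields:

    # Given synced records carrying created/resolved timestamps, MTTR is the
    # average time from creation to resolution.
    from datetime import timedelta

    def mean_time_to_resolution(tickets):
        durations = [t["resolved_at"] - t["created_at"]
                     for t in tickets if t.get("resolved_at")]
        return sum(durations, timedelta()) / len(durations)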

For more see the Product Overview or Request a Demo.

APIs Are Not The Keys To The Integration Kingdom

Mon, 01/30/2017 - 10:53

Imagine a nirvana where software lifecycle integration just works. A place where an intricate ecosystem of best-in-class tools for software development and delivery runs seamlessly and its users benefit greatly from the steady flow of real-time information. Despite being a constant hub of activity, it’s also a place of calm – a Zen environment for everyone involved in the toolchain.

Every team – from testers to developers to PMOs to business analysts and PPMs – is in sync. Thanks to the end-to-end integrated workflow, everyone in the value chain has the visibility and traceability required to work on the project to the best of their abilities. Productivity is optimized and IT initiatives are driving their organization forward, helping them to consistently deliver high-quality products and services to their customers.

At the heart of this nirvana are APIs. In this fantasy, APIs provide developers with all the essential information they need to make two endpoints connect. They possess this information because the vendors built their respective tools with integration in mind, including detailed documentation that helps external developers work with each tool’s repository through its API.

If only this nirvana existed. The reality is integration is one of the hardest technical processes that an organization can face. It’s an all-encompassing job and APIs have a starring role that significantly influences the outcome.

Now, using a tool’s APIs is the best and most stable way to access the information stored in the tool’s underlying database. APIs facilitate access to the endpoint’s capabilities and the artifacts that they manage, and they can also enforce business logic that prevents third parties from unauthorized activities.

However, while APIs are a critical piece of the integration puzzle, they also highlight the delicate intricacies involved in the integration process. Many of these APIs were actually created for the vendor’s convenience in building a tiered architecture, not for third-party integration. They were not made with a consumer in mind – they were an afterthought, if you will.

As a result, these APIs are often poorly documented and incomplete:

  • Model objects don’t necessarily work correctly together
  • Data structures, how and when calls can be made and the side effects of operations are all often excluded from the documentation
  • Poor error handling and error messages are common
  • Edge cases and bugs are rarely documented
  • Some APIs aren’t fully tested e.g. some tools may return success even when all changes aren’t made
  • Some APIs have unexpected side effects/behavior e.g. updates that result in delays for changes to appear
  • Some APIs have inconsistencies between versions e.g. different vendor endpoints to retrieve tags
  • Because they’re not documented, figuring out how to handle these issues requires a great deal of trial and error. And sadly, the vendor’s customer support staff is often unaware of many of these issues and of how to use their own API, so finding a resolution often requires access to the endpoint vendor’s development teams

So what does this all mean exactly? Consider a kitchen for a second; the pantry is full of ingredients (APIs) to make a recipe (the formula for the integration), but without correct labelling (documentation of the APIs), we have no idea of what they are, their expiry date, how best to use them etc. Any attempt at cooking an integration will likely end in disaster.

What’s worse, these APIs can change as the endpoint vendors upgrade their tools. Depending on how thoroughly the vendor tests, documents and notifies users of API changes, these changes can break the carefully crafted integrations. For SaaS and on-demand applications, these upgrades happen frequently and sometimes fairly silently.

So any API-based connection is little more than glue holding together two systems – a temporary and unreliable measure. There’s no maintenance or intelligence built into the tool to ensure the systems are continuously working together. In a software world that faces a relentless barrage of planned and unplanned technical changes and issues, such a brittle integration is unacceptable. Your software development team will suffer, as will your overheads and the value you deliver.

With that in mind, we need to find a way to label the APIs and gain a better understanding of how to use them collectively to create first-class integrations. The first step is always to do an exhaustive technical analysis of the tool:

  • How is the tool used in practice?
  • What are the object models that represent the artifacts, projects, users and workflows inherent in the tool?
  • What are the standard artifacts and attributes, and how do we (quickly and easily) handle continual customizations such as additions and changes to the standard objects?
  • How do we link artifacts, create children and track time?
  • Are there restrictions on the transitions of the status of an artifact?
  • How do we use the APIs?

This analysis can be very time-consuming, especially when you factor in poor documentation and design flaws (in the context of integration). And what at first appear to be pretty simple tasks actually turn out to be surprisingly hard. For instance, ServiceNow has 26 individual permissions to go through – no quick or easy endeavor. The results of any analysis should reveal the knowledge discrepancies and highlight how the lack of information hampers the possibility/quality of the integration.

By now, you probably have a fair idea that using APIs to create an integration takes a herculean amount of effort behind the scenes. And trust us, that’s only the tip of the iceberg. We’ve spent over a decade building up an encyclopedic understanding of software lifecycle integration (SLI), and the education never stops.

Fortunately, we’re fully equipped with the right brains, technology and processes to stay at the vanguard of the market, using domain expertise and semantic understanding to create robust large-scale integrations that grow with your software landscape.

For more information, please:
Speak to our dedicated team
Visit our product pages
Download our eBook on ‘Why Integration Is Hard’

We will also be discussing in detail the huge challenges involved in software lifecycle integration tomorrow (Tuesday, January 31st) during our special live streamed event, TasktopLIVE. You can find out more about the event here.

Why Do Software Lifecycle Integration Projects Fail?

Fri, 01/27/2017 - 13:16

Most software lifecycle integration (SLI) projects fail because organizations underestimate just how difficult integration is and are unaware that software development and delivery tools are not designed to integrate.

Endpoint tools were built by their vendors to create a tiered architecture and not necessarily for third-party integration. The tools are built for a specific function e.g. JIRA for Agile Project Management, HPE ALM for Application Lifecycle Management, CA PPM (Clarity) for Project and Portfolio Management and so on. They’re best-of-breed tools built to optimize their users’ capabilities in their respective roles.

By looking at connecting ‘just’ two tools, you quickly see how technical clashes between them create a litany of complications that undermine the two tools’ ability to communicate – despite this being the bare minimum requirement of any integration.

You don’t want to just mirror an artifact from one tool in another – you want that artifact to be understood across the value chain so that all teams and tools understand the context of what they’re working with, and towards, for optimized collaboration. To do this, we must ascertain:

  • How each individual tool is used and by whom
  • What object models represent the artifacts, projects, users, workflows etc.
  • How we handle continual customizations such as changes and additions
  • Any restrictions, user behaviours, needs etc.
  • Any future expansion/scalability objectives

Each endpoint has its own model of ‘like objects’ (such as a defect), and in theory they hold the same type of data. But each tool stores and labels this data differently, and the data can be modified with custom attributes and permissions, and with different formats and values.

For instance, the defect tracking tool may have three priority levels (1, 2, 3) but the agile planning tool may have four (critical, major, minor, trivial). They have the same understanding of the artifact, but possess no means to accurately communicate to each other. They need a multi-lingual translator.

These influences mean any synchronization between artifacts must occur between widely divergent data models, creating a ‘data shoehorning’ of sorts. You’re trying to align two concepts that don’t naturally match, and this will create conflict – or what we call ‘impedance mismatch’.
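A tiny sketch shows why the shoehorning is lossy (tool names and values are invented): with three levels on one side and four on the other, two values must share a target, and a round trip cannot recover the original.

    TRACKER_TO_PLANNER = {1: "critical", 2: "major", 3: "minor"}
    PLANNER_TO_TRACKER = {"critical": 1, "major": 2, "minor": 3, "trivial": 3}

    p = PLANNER_TO_TRACKER["trivial"]   # -> 3
    back = TRACKER_TO_PLANNER[p]        # -> "minor": "trivial" has been lost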

Impedance mismatch occurs because of the different languages being used and the relationships that artifacts have with other objects. These relationships must be understood to provide context; artifacts do not live independently of one another. Each story belongs to an epic in a tool, and there are many chapters within that story that must be understood and communicated for tools to interoperate. We call this ‘semantic understanding’.

In seeking this information, it’s only natural to consult the tool’s APIs. And it is at this junction that we discover our first real hurdle to integration. APIs rarely provide this ‘integration essential’ information because they’re not documented for such a process – as touched on earlier, they’re created for a specific purpose by the vendor.

If there is any documentation, then it’s often vague and/or incomplete. And of course, all tools are subject to sudden upgrades and changes – especially given the rise of on-demand and SaaS applications – which will instantly undermine any integration. You can read more about why APIs are a double-edged sword for integration in our blog ‘APIs are not the keys to the integration kingdom’ next week.

Furthermore, connecting two endpoints is only the start. The real challenge comes when you want to add a third, fourth or fifth connection – which is exactly what you should be aiming for in effective large-scale integration. It would be natural to assume that once the ‘hard work’ of the first connection has been done, any additional integration would be a simple, iterative process. Sadly, this is not the case – there is no one proven formula. The complexity only increases:


While the learning curve isn’t as steep as the first integration, the curve doesn’t flatten as one would hope. Some of the issues that reared their ugly heads in the first integration will return. Once again you’ll have to run the technical analysis of the tool, establish how artifacts are represented, identify the similarities and reconcile the differences with other tools in your software workflow. The API information will once again be little help and there will be more unforeseen circumstances.

So how do you safeguard your software development ecosystem with a robust, steadfast integration?

The key is a solution that understands the complex science of large-scale software development integration and possesses the ‘next level’ information that APIs don’t provide. A model-based solution that provides the multi-lingual translator to ensure that all endpoints, regardless of number, can communicate with each other. If you’re investigating a solution you need to make sure it includes the following:

  • Semantic understanding
  • Domain expertise
  • Neutral relationships with endpoint vendors that allows for deep understanding of the tools
  • Testing infrastructure that ensures integrations are always working

For more information, please:
Download our eBook on ‘Why Integration Is Hard’
Speak to our dedicated team
Visit our product pages

We will also be discussing the huge challenges involved in software lifecycle integration on Tuesday, 31st January – during our special live streamed event, TasktopLIVE. You can find out more about the event here.

Bringing ITSM and DevOps together

Thu, 01/26/2017 - 13:27

Sometimes a new year brings a new way of thinking. When it comes to software integration, it’s time to stop focusing on connecting specific tools and start focusing on enabling collaboration, reporting, and traceability for all of the domains or silos in your organization. Connecting specific tools is a technical detail, but connecting silos is what drives real value for an organization. In this blog series, members of the Tasktop Solutions team will review several different domains of software development and point out how improvements can be made using integration.

IT Service Management (ITSM) is one such domain. It encompasses customers, services, quality, business needs, and cost. The goal of ITSM is to enable IT to manage all of these holistically. This helps optimize the consumption and delivery of the services provided by the IT organization. Many people view ITSM as the service desk, but it’s not just about tickets and support. ITSM relates to the overall management of the IT organization. Service desk is just one small piece. ITSM is typically operated within the IT team, applying one of the many frameworks that can help ensure success. ITIL (IT Infrastructure Library) is one of the most common frameworks, but there are others (like COBIT, ISO 20000, and SIAM) – all used for very specific purposes.

There are also many different ITSM and Service Desk tools available today. They can be generic or focused on one of the frameworks. A few examples are:

  • ServiceNow ServiceDesk
  • BMC Remedy
  • Cherwell Service Management
  • HPE Service Manager
  • Salesforce Service Cloud
  • Desk.com
  • Zendesk
  • Freshdesk
  • Atlassian JIRA Service Desk

The ITIL framework used in ITSM provides a library of processes that utilize a variety of functions (service desk is one) to help ensure that the design, implementation, management, and support of an organization’s IT services are developed and delivered optimally and in a controlled manner. Most organizations utilize only a few ITIL processes. Typically, they include:

  • Incident Management
  • Problem Management
  • Change Management

Using these three processes, organizations can speed up delivery and guarantee that high-quality services are provided to customers. These processes also help ensure that issues are handled properly, categorized, and rolled out in a controlled manner.

The increased push to bring DevOps into ITSM has also created a pressing need for integration, because integration helps the organization manage things closely even when a variety of teams are using a variety of tools (e.g. Agile tools like Atlassian JIRA or LeanKit). It also enables the organization to maintain traceability and ensure that quality services are being provided.

And integration is not just about connecting the tools. It’s also about connecting the teams involved in the work that is being tracked in these tools. ITSM is a holistic process that can touch all aspects of the software development process from support and IT professionals to developers and product managers. When looking to integrate with an ITSM tool in a DevOps world, the three main processes (incident, problem, and change management) are very complementary to the ways integration works best. Commonly, development teams require tight interaction with the IT organization in order to handle common patterns such as Help Desk Incident Escalation, Help Desk Problem Escalation, Help Desk Feature Request, and Known Defect Status Reporting to Help Desk.

To put this all together, incidents and problems originating in the ITSM tool can be escalated to the development team as a defect for resolution and to the testing team for verification. Once that defect is fixed, the development team can use their tool of choice to open a new change request, which will automatically be created in the ITSM tool, to deploy the fix to production. This integration results in seamless collaboration between the teams, within their tool of choice, while ensuring that traceability is maintained between these systems and the originating records.
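In pseudocode, that flow might look something like this – the tool clients and method names are hypothetical, not a real Tasktop or vendor API:

    def escalate_incident(itsm, dev_tracker, incident_id):
        # Escalate an ITSM incident to the development team as a defect,
        # keeping a traceability link back to the originating record.
        incident = itsm.get_incident(incident_id)
        return dev_tracker.create_defect(
            summary=incident["summary"],
            linked_incident=incident_id,
        )

    def on_defect_fixed(itsm, defect):
        # Once the defect is resolved, open a change request in the ITSM
        # tool so the fix is deployed through change management.
        itsm.create_change_request(
            summary="Deploy fix for " + defect["id"],
            linked_defect=defect["id"],
        )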

Once all tools and teams are integrated as a part of the ITSM process, the delivery of changes is faster, more automated, and there is an enhanced level of traceability—so the organization knows what was required to repair a problem or complete a change request. This results in increased effectiveness and efficiency when it comes to the process and the product being delivered.

As companies grow, there is an increased need to look at “supply chain integration.” This is typically due to an increase in outsourcing IT services and a need for different organizations to work together. Integrating ITSM tools between 3rd parties can be a great way to ensure that information is transferred quickly between the systems and without error. This allows companies to work together seamlessly.
