
Tasktop Connect 2017: Why Now and Why Columbus?

Wed, 05/24/2017 - 07:29

Today we announced the inaugural Tasktop conference, Tasktop Connect 2017, which will take place on Wednesday 4th October 2017 in Columbus, Ohio. Hosted at VUE, a modern and stylish urban venue in Columbus’ historic brewery district, this dynamic event will provide a pertinent snapshot of the state of software delivery.

Bringing together the shared experiences of IT transformation leaders and Agile, DevOps and Lean visionaries, this inspiring conference will provide attendees with tangible takeaways on how to optimize their software delivery and scale Agile and DevOps transformations.

The high-quality content program – a host of educational sessions from Tasktop customers and industry thought-leaders – is really beginning to take shape, with Gene Kim and Carmen DeArdo already confirmed, and we cannot wait to unveil further details in the coming months.

But first – why now?

This year saw everything we’ve been working toward at Tasktop come to fruition, with a host of milestones including:

The common theme across all these milestones is ‘customer success’. Tasktop and our customers are on a shared journey to change the way software is built and we wanted a dynamic platform that both celebrated our mutual achievements and laid the foundations for future growth. Our customers were clearly on the same page – we were inundated with questions about whether we were going to host our own event!

So we knew ‘why’ we wanted to host our own event – to continue to support and drive our customers’ digital transformations – but the next big question was: where? The answer was simple: Columbus.

But why Columbus?

For many of you who haven’t been to Columbus before, it may come as a surprise that Tasktop is hosting its first-ever user conference in the Ohio capital. You might even be saying, “Hang on, aren’t you based north of the border in Vancouver, BC? And aren’t your US headquarters in Austin, TX? Wouldn’t it make sense to host the event there?”

The answer is an emphatic “no!”. Columbus is the perfect representation of Tasktop’s burgeoning enterprise customer base and a natural location to gather our audience for our inaugural event. Tasktop counts 43 of the Fortune 100 as customers, and 8 of those 43 are headquartered and/or have a significant presence in Ohio. Most of the others are spread across the Midwest and the Eastern US and Canada, making Columbus an easy drive or a short direct flight away.

The support that Tasktop and other software startups receive in Columbus shouldn’t be a surprise to the “locals”. The Columbus ecosystem is special. The Fortune 100s that call Columbus home are also very committed to supporting the local startup community, and while Tasktop isn’t a Columbus-based startup, the large Columbus enterprises have been instrumental to our success.

This quote from financial company Chase, which has a huge presence in Columbus, perfectly sums up the gravitational pull of the city: “With so much of the technology, change and leadership here in Columbus, we don’t go to New York, New York comes to us. And we’re proud of that.” And as an Ohio kid, I couldn’t be happier for the event to be hosted in a place I call home.

For further information, please visit the Tasktop Connect website and if you have any further questions please don’t hesitate to contact us. We can’t wait to welcome you to what promises to be an energetic and rewarding day for everyone involved as we continue to connect the world of software delivery.

Automate Everything: Reflections on HPE Customer Forum, Dublin

Mon, 05/22/2017 - 13:35

Getting started with automation is a bit like investing – high risk with potentially high rewards. Most DevOps transformations are set in motion in order to get things done better, with “better” often meaning getting things done quicker and cheaper.

I attended the lifecycle and continuous tracks on Days 1 and 2, respectively, of the HPE Customer Forum in Dublin 2017. It was clear from the start that Kaizen is at the heart of DevOps; or, as Tal Levi Joseph, VP of R&D ADM at HPE, put it: “DevOps is an evolution, not a revolution”.

As a summary of the talks, I compiled the diagram below (figure 1). The diagram depicts the common challenges and benefits of DevOps and thus does not describe every DevOps transformation exactly – I did hear of one case where speed actually dropped as a result of applying DevOps practices (quite rare, I’d say). Traditionally, all people wanted from DevOps was agility and speed. However, the demands and realized benefits in more recent DevOps adoptions focus on the right talent, scale, and quality.

Figure 1: Summary of key learnings from talks at #HPEForumsDublin. The new, mature era of DevOps focuses on continual improvement, allows for failure, and plans for large-scale deployments from the start (not forgetting traditional goals such as agility). Automation and integration underpin the benefits – as well as the challenges – across the DevOps transformation.

So what about the risks? Better seen as healthy challenges, DevOps initiatives will expose flaws in processes and collaborations (or the lack thereof) within organizations. Toine Jenniskens from Rabobank highlighted the importance of automation when getting started with DevOps. Even if not all automated processes work perfectly, they still bring efficiencies, give more accurate outcomes without human error, and quickly indicate which parts of the automated process are broken. One of his mantras was “one function, one tool”. Indeed, many tools have been developed with one primary function in mind, and using a best-of-breed solution for each is typically the best way forward.

In a different talk, Arne Luehrs from HPE described ChatOps as “anything that is not email and allows users to communicate in real-time” – a great example of an organization embracing a new method of communication to share and explore information in a way that serves its users better. HPE now has over 4,000 chatrooms dedicated to particular systems and configuration items, with empowered teams able to collaboratively create their own rules of engagement for each conversation.

The above examples are really about reducing “time to market”: getting rid of the boundaries of email, or automating tasks. But what else can you do? One idea that came up was to regard everything as code – infrastructure, data, best-practice code – giving you the possibility to automate everything. A good example of this is when developers have access to all code written within the organization and can quickly find a package written and reviewed by their peers. This not only saves time but adds much-needed resilience and lowers the risk of poor code in the developed solution.

To summarize the fantastic two days in Dublin in two words: embrace failure. This forces teams to re-establish an open culture where, instead of looking for someone to blame, teams concentrate on working together with one goal in mind: team success.

Software Defined: The Business Impact of IT Automation using Infrastructure as Code

Tue, 05/16/2017 - 08:37

Last week we looked at Infrastructure as Code (IaC), an emerging set of IT automation tools and practices that enable infrastructure management through a software layer.

Today we will examine the impact of IaC on the business, using a “software-defined” lens to understand how this technology is a driver of change and transformation in the software supply chain. We will look at how Infrastructure as Code changes IT service management strategies, provides a foundation to launch DevOps initiatives and increase the scope of Agile practices, and demands a transformative approach to adoption in the enterprise.

Re-Defining IT

Because Infrastructure as Code is a software-defined technology, we should immediately expect it to have a major impact on the business and how it is organized. Recognizing IaC as software-defined matters because looking at it through this lens gives us an understanding of what to expect as we start to adopt the technology in the enterprise. Analysis using three key attributes of software-definition will shed some light on the impacts:

1. Abstraction

Infrastructure as Code requires that all operations on infrastructure are declared in definition files and executed using an automation tool. This automation layer provides an abstraction from operations like deploying, configuring and managing components. This means that these operations shift left in the software supply chain – they are performed earlier and all together, rather than sequentially at the final stages of activity.

With this abstraction, the skill specialization for managing infrastructure shifts from traditional vendor- and application-specific sysadmin skills to the ability to write code and think through the abstraction. The roles and responsibilities for managing infrastructure can move to anyone with a proficiency for writing code.

2. Control

Since infrastructure can be fully documented in code, we can “read” the environment – see everything that was deployed and how it was configured by simply reading the definition files.

The focus of service management and control systems therefore shifts to managing the automation tooling and definition files. For example, use of a version control system to manage the definition files brings about the idea of “versioned infrastructure”, where the change record is reflected in the version history.

Similarly, change management can be accomplished through code reviews performed individually when the changes are checked in, rather than putting batched changes before a change review board.
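To make that concrete, the sketch below shows what a change-management pipeline for definition files might look like. The YAML schema and the "automation-tool" commands are illustrative assumptions – real CI systems differ – but the pattern is the point: every change is checked in, peer reviewed, automatically validated, and only then applied.

    # Hypothetical CI pipeline for "versioned infrastructure".
    # Schema and commands are illustrative, not a real tool's syntax.
    on_pull_request:
      steps:
        - name: validate          # automated checks run on every proposed change
          run: automation-tool validate definitions/
        - name: preview           # show reviewers exactly what would change
          run: automation-tool plan definitions/
    on_merge_to_master:
      steps:
        - name: apply             # only reviewed, merged changes reach the environment
          run: automation-tool apply definitions/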

3. Mutability

The environment is exhaustively described in definition files, which are not dependent on infrastructure attributes. Ideally, the infrastructure itself is “immutable” – no changes are made directly to the infrastructure once it is deployed. Infrastructure components are locked down and not directly accessible to humans – changes are deployed only with the automation tooling.

This has two impacts. First, it introduces commoditization to the deployment process. The same definition file can be used to bring up one server, or a hundred. Additional assets can be deployed as needed, just-in-time, and torn down when they are no longer required. Elastic assets mean infrastructure is always ‘right-sized’.
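As a sketch of that elasticity, consider the CloudFormation-style template fragment below (illustrative and incomplete – the referenced launch configuration would be defined elsewhere in the same template). The same definition brings up one server or a hundred; only a single number changes.

    # Fragment of a CloudFormation-style template (illustrative, not complete).
    Parameters:
      DesiredCapacity:
        Type: Number
        Default: 1               # change 1 -> 100; the definition is otherwise identical
    Resources:
      WebServerGroup:
        Type: AWS::AutoScaling::AutoScalingGroup
        Properties:
          MinSize: 1
          MaxSize: 100
          DesiredCapacity: !Ref DesiredCapacity
          LaunchConfigurationName: !Ref WebServerLaunchConfig   # assumed defined elsewhere
          AvailabilityZones: !GetAZs ""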

Secondly, the underlying infrastructure becomes modular. We can use our definition file to bring up our components on-premise, or in the cloud; on AWS, or on Azure. While there are some dependencies involved in the automation tooling, overall the infrastructure layer has few enough hardware or vendor dependencies that operators have a lot more freedom to choose where to host their infrastructure.

Adoption Through Transformation

For a business that relies on a software supply chain to deliver value to customers, Infrastructure as Code represents a significant opportunity to increase efficiency, lower costs and reduce risk.

Further, IaC has emerged as a vital driver in transforming and modernizing the software supply chain. Digital-first businesses like Amazon, Netflix and Facebook live and breathe software-defined infrastructure. This is because they have been able to build their business, their culture and their value stream in a greenfield without the encumbrance of legacy systems and practices.

As with all software-defined technologies, incumbent enterprises that have a well-established value stream will have difficulty with wide-scale adoption of IaC. Some of the challenges they face include:

  • Culture: All software-defined technologies present the significant challenge of redefining roles, shifting responsibilities, and altering work structures. Part of the transformation to adopt these technologies therefore involves a cultural change across the technical side of the organization. DevOps, as a culture, has emerged from this, and embraces these new roles and responsibilities. This will need to be nurtured and allowed to grow through a process of sharing and collaborating.
  • Practices: IT professionals are typically accustomed to working within project-based work structures like PMP and Prince2. Infrastructure as Code, however, begs for the use of software development practices like Agile to manage work. Implementing software-defined infrastructure will have the effect of proliferating management techniques like Scrum and Kanban. This is a great opportunity, but equally a challenge to get everyone on board and trained up on the new methodologies and the tools they use.
  • Value: A fundamental characteristic of software-defined technologies is that they recast management strategies. With IaC, the IT supply chain is altered beyond recognition, requiring new thinking about service management strategies. The changes in where, when and how infrastructure will be managed and deployed mean organizations need to undergo a paradigm shift in thinking about how value delivery is organized.

In order to fully adopt software-defined infrastructure, incumbent enterprises will need to be ready to undergo a transformation. There needs to be a commitment and willingness to invest in new tools, new skills and new relationships. Leaders will need to be open-minded and willing to run tests and experiments to determine how best to reengineer their value stream to capitalize on the opportunities ahead. The crisis of ITSM, if it can be called one, must be embraced as a catalyst for change.

Software-Defined: IT Automation using Infrastructure as Code

Thu, 05/04/2017 - 09:40

In my previous article, we looked at several examples of technologies that have become software-defined, and determined that adoption demands a significant shift in how a business organizes its value stream. We looked at some of the key concepts underscoring software-defined technologies and how they reshape the enterprise. By understanding those concepts, we can anticipate changes and better position the business to react when software-defined technologies emerge.

In this two-part article we will look at Infrastructure as Code (IaC), an emerging set of IT automation tools and practices that enables infrastructure management through a software-defined layer.

This week we will examine the role of IaC in the application lifecycle and look at how it is used in practice. Next week I will apply the core concepts of software-defined technology to understand the impacts of IaC on the enterprise value stream.

Applications: The Spoiled Children

All applications run inside what we call an “environment” – a stack of hardware and software components built to support the application. This stack includes: networking, storage, virtual machines, operating systems, databases, libraries, dependencies, and the application itself. Building an environment requires many activities to bring up that stack, provisioning and configuring each component according to the requirements of the application.

All of this is done to serve the application, which is like a badly spoiled child – always demanding that things be “just so”, throwing tantrums at even the most seemingly insignificant departure from expectations.

The processes used to get an environment ‘just right’ (and keep it that way) have been the subject of much analysis and design over the years, becoming part of the body of work known as IT Service Management (ITSM). Recently however we have seen the development of a new set of tools and practices used to create and manage environments, known as Infrastructure as Code (IaC).

Infrastructure as Code

Infrastructure as Code, also known as Programmable Infrastructure, involves the use of code and automation tools to perform the activities needed for building an environment. It replaces many of the processes involved in the deployment and ongoing management of the complete hardware-software environment in which an application will run.

While IT professionals have always used some automation such as scripting to help deploy environments, Infrastructure as Code is a recent development characterized by use of the following:

  1. Code – At the core of IaC is the code: definition files that declare the specification for each component of the environment and how it is configured. These files might be written in YAML or JSON, and will be checked into a version control system like Git (a minimal sketch of such a file follows this list).
  2. Automation tooling – Specialized tools read the definition files and use them to construct the environment and configure components according to specification.
  3. Application Programming Interfaces (APIs) – Automation tools perform the actions described in the definition files against APIs. Not only will the automation tools use APIs to provision and configure the components of the environment being managed, but the tool itself will be programmable through its own API.
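To ground these characteristics, here is a minimal, illustrative definition file written for Docker Compose, one of many YAML-based tools (the service names and images are arbitrary examples, not a recommended stack). The file declares what the environment should contain; the automation tool reads it and constructs that environment through the Docker API.

    # docker-compose.yml – a minimal, illustrative definition file.
    version: "3"
    services:
      web:
        image: nginx:1.13            # component specification: which software, which version
        ports:
          - "80:80"                  # configuration: how the component is exposed
        depends_on:
          - db                       # dependencies between components are explicit
      db:
        image: postgres:9.6
        environment:
          POSTGRES_PASSWORD: example # illustrative only – real secrets belong elsewhere

Because the file is plain text, it can be checked into a version control system and treated exactly like application code: shared, reviewed, and versioned.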

The development of powerful automation tools, along with the widespread proliferation of APIs, has allowed Infrastructure as Code to emerge as a very effective means of managing IT operations processes.

Rather than working with GUIs, scripts and command-line interfaces to perform actions, we are able to work with documents (code) that exhaustively describe the environment. These are easily shared, reviewed, and versioned. The actions at each step are executed, not performed, and are therefore much less prone to human error. Let’s look more closely at what those steps are to understand how IaC is actually used.

Putting it to work 

While setting up an environment requires a number of different components and services, we can group these into three distinct steps:

  1. Provisioning – The first step is to provision the foundational infrastructure systems – servers, networks, databases, storage. Provisioning tools perform this task, and are usually supplied by the infrastructure vendor. For example, Amazon provides CloudFormation to create VPCs (networks) and spin up EC2 instances (Servers), and, likewise, Azure gives us Resource Manager to create Network Security Groups and bring up Virtual Machines. There are also some provisioning tools like Terraform that are vendor agnostic, making switching between infrastructure vendors easier.
  2. Configuration – The second step is to configure the provisioned components, and Configuration Management tools accomplish this task. This is a broader set of tools used to perform operations like transferring files, installing services, configuring settings, and so on. There are many tools in this space, but the “Big Three” are Puppet, Chef, and Ansible. Each has its own advantages and disadvantages, however they all accomplish the same goal – configuring the components with the required dependencies and settings (a sketch of a playbook for this step follows the list).
  3. Deployment – The third step is to deploy the application. More and more this involves the use of container technologies like Docker. Container technologies are a recent advancement in IT that deserve their own article and explanation, so, for now, suffice it to say that a Container allows an application and its dependencies to be wrapped up into a package that is easy to deploy into its own isolated space on a machine. Containers provide an additional layer of abstraction from the provisioning and configuration.
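As a sketch of the configuration step, here is a small Ansible playbook; the host group, package and file paths are illustrative assumptions, and equivalent recipes or manifests could be written for Chef or Puppet. Run against already-provisioned servers, it installs a web server, deploys a configuration file, and restarts the service only when that file changes.

    # playbook.yml – an illustrative Ansible playbook for the configuration step.
    - hosts: webservers                        # assumes an inventory group named "webservers"
      become: yes                              # escalate privileges to install packages
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present
        - name: Deploy site configuration
          copy:
            src: files/site.conf               # hypothetical local file
            dest: /etc/nginx/conf.d/site.conf
          notify: Restart nginx                # handler fires only if the file changed
      handlers:
        - name: Restart nginx
          service:
            name: nginx
            state: restarted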

The tooling landscape used to perform these steps is highly fragmented, and strategies often use an opinionated, best-of-breed approach with a different tool at each of the three steps. For example, a team might use CloudFormation to set up the virtual machines and connect them to the network, Chef to configure and secure the virtual machines, and Docker to load the application into an isolated container.

There is no “correct” way to set up your automation stack – this will depend on the limitations of the tools and the needs of the organization – but it should be understood that adopting this technology will invariably change the way the organization manages IT work.

Anticipating Change

IaC presents us with a large and increasingly complex software-defined layer that is used to perform infrastructure management functions. It is important to note, however, that what becomes software-defined here is not the infrastructure. Rather, software-defined infrastructure is a prerequisite for the use of IaC.

What becomes software-defined with Infrastructure as Code are the processes used to manage the infrastructure. This can create major challenges for traditional service management strategies – the roles, responsibilities, methods and practices involved in the management of infrastructure change considerably. It also holds great opportunities, providing a catalyst to launch DevOps initiatives and increase the scope of Agile practices across the value stream. By understanding IaC as a software-defined technology, we can gain insight into its impact on the enterprise.

In next week’s article we will examine IaC under the lens of software-definition, and look more closely at the challenges and opportunities of this driver of change and transformation in the software supply chain.

Announcing Gene Kim as Strategic Advisor, helping guide our Value Stream Integration Vision for DevOps

Thu, 04/27/2017 - 11:01

Ever since I started bumping into Gene Kim at countless conferences, the learning from our meetings just keeps accelerating. I have never met anyone who can match Gene in channeling where the industry is at, seeing the vision for a better way, and bringing together the technologists and leaders who will get us there.

I am absolutely thrilled to announce that Gene is joining as a Strategic Advisor to Tasktop.  Gene and I share a passion for transforming how software is built, and for unlocking the $2.6 trillion of value creation that the world will see once companies connect the Value Streams within their organizations and across their software supply chains (see Introduction chapter of the DevOps Handbook).

At a recent All Hands meeting, I asked over 100 Tasktop staff to read the DevOps Handbook. While I regularly update my reading list with suggestions for various departments and functions, I never thought I’d suggest a book for the entire company to read. I was not only blown away by the breadth and depth of the work, but also by how it summarizes some of the key learnings from the most important lean, management and software delivery books that I’ve read. It gives us all the common vocabulary and foundations needed to collaborate on the next phase of software delivery.

With DevOps, Gene has spearheaded and nurtured a movement that is reshaping the future of software delivery more than any other effort I know of right now. Every industrial revolution needs a new kind of infrastructure, and I created Tasktop to become the company that provides every organization with the infrastructure necessary for software transformation and innovation. I am thrilled that Gene will help guide Tasktop and our customers on this amazing journey, and help us with our mission of paving the path to turn the world’s most important organizations into the next generation of software innovators.

A Better Future With Intelligent Digitalization

Thu, 04/27/2017 - 09:42

Saving lives. This is the main goal of PA Consulting’s James Mucklow, who is passionate about healthcare and technology. Listening to James, it is no wonder that his customer, the National Institute for Health Research (NIHR), is happy with its recent digital transformation. James considers digital culture the first challenge to tackle – one that inherently includes multi-disciplinary teams, embracing DevOps, and the business appreciating the importance of IT. His claim that changing human behavior drives new operating models follows logically from that introduction.

Engage ESM, part of Atos, hosted a morning seminar on accelerating digital transformation at the National Theatre. Engage ESM’s CTO Roderick De Guzman opened the event by talking about the importance of cloud and how to support it by developing and leveraging your existing ITSM organization, with a clear focus on the two most complained-about factors of traditional IT: its speed and cost.

The NIHR case study was not without its challenges. How do you collaborate with 3 million NHS staff and implement a new service quickly? One of the solutions is what James’ team designed to help the 850,000+ people affected by dementia in the UK: a site that matches people interested in taking part in research and trials. The results of this solution were extraordinary, with recruitment time dropping from months to weeks. As part of the wider “going cloud” initiative, they were able to reduce operating costs by 50%, increase productivity by 20%, and achieve an 85% first-time fix (FTF) rate on the self-service portal.

Other talks at the event included Chris Pope of ServiceNow, who spoke enthusiastically about bots, machine learning and augmented reality (AR) – possibly also inspired by ServiceNow’s acquisition of DxContinuum in December. For the more impatient, bots can give a much-desired fast response. “Bots are really just content that you can buy pre-packaged,” explained Chris. Bots can be used, for example, in first-level support to answer queries from customers, or to automatically route tickets based on their description.

According to Chris – and I would agree – the biggest problem with deploying a new AI (artificial intelligence) solution is when the problem is not understood, or a solution is proposed too early. Chris gave the example of a mine that previously had a time-consuming, manual task allocation and management process, and transformed it by using sensors to automatically identify valuable loads and then set the parameters for the post-processing units.

AI needs historical data – and patience. But we need to be careful with how bots are trained; “supervised training” can easily lead to bias and discrimination [1]. Thus, as with any technology deployment, we should concentrate on the humans using the solution. Raising awareness of unconscious bias amongst users of AI should be made a priority. When the true power of any technology is understood by its users, it will make the world a better place. And help us save lives.

References:

  1. Google Research Blog: Equality of Opportunity in Machine Learning. https://research.googleblog.com/2016/10/equality-of-opportunity-in-machine.html (accessed 26th April 2017)

Let’s Get Visual: Visualize Your Integration Landscape with Tasktop Integration Hub

Tue, 04/25/2017 - 06:21

On January 31st, Tasktop reimagined integration with the launch of Tasktop Integration Hub. Tasktop’s Model-Based Integration platform allows organizations to define their Value Stream Integration layer within the Tasktop interface, automating information flow across teams, processes, and tools.

Today, Tasktop introduces an upgraded release of the Tasktop Integration Hub featuring Landscape View.

Landscape View provides a simple but dramatic visual overview of an enterprise’s entire software delivery value stream. This type of at-a-glance value stream overview is the first of its kind. It allows users to quickly see which systems are integrated, what models are being used, whether the flows are one-way or two-way, and which artifacts are flowing between tools (e.g. Stories, Defects, Requirements).

Using Landscape View, Tasktop administrators can visualize their entire integration landscape and filter the view by model or artifact within seconds.

On a macro level, administrators no longer need to explain integrations to CIOs via whiteboard diagrams. CIOs can see the entire value stream. And this holistic view helps them make critical business decisions faster.

For more information on Tasktop Integration Hub features visit our features webpage or request a demo today.

Editor’s Choice for Innovation: Tasktop Integration Hub

Thu, 03/30/2017 - 07:31

Before deploying Tasktop, most of our customers relied on a web of point-to-point integrations to connect their DevOps toolchain. Whether developed in-house or through third parties, the cost to create and maintain these integrations drained resources that are now being directed to delivering customer value. In a recent article, SearchSoftwareQuality Executive Editor Jan Stafford named Tasktop Integration Hub Editor’s Choice for Innovation and interviewed Tasktop customer TIAA.

Customer Perspective: Connecting the DevOps Toolchain at TIAA

As Relationship Technology Director for TIAA, Mark Wanish remembers managing tool integration before deploying Tasktop. “There was too much overhead to innovate, because it took so long to do point-to-point integrations. It got more and more cumbersome and costly,” he said.

Now that TIAA has automated integrations throughout the DevOps toolchain, the entire team has access to data across the value stream. “We can see what’s happening in each system without jumping around,” said Wanish. That means more time adding business value instead of jumping between different apps to find information or create reports for other team members.

Read the full article to learn more about why SearchSoftwareQuality named Tasktop Integration Hub Editor’s Choice for Innovation.

Four Things I Learned As A Project Manager (and none of them include Gantt charts!)

Wed, 03/29/2017 - 14:09

I joined Tasktop’s Product team last June after spending three years working as a Project Manager. Project Management as a field seems to get less respect than some other disciplines in the world of technology, but there is one thing project managers are indisputably good at, and that is making things happen.

As a project manager, your job is to ensure that contributors – developers, business analysts, and others – are able to focus solely on their roles. The developer’s job is to code. So let them code! They shouldn’t have to worry about things like schedules, customer relationships, or resource allocation. By allowing each team member to focus only on their specific role, the team can function as efficiently as possible.

I was able to incorporate my project management skills into my new role at Tasktop by managing the release of our new product, Tasktop Integration Hub, across all departments of the business. Though my current role extends outside of the project management world, I’ve been able to apply the lessons I’ve learned to a wide range of professional roles, and even to my personal life.

Here are the top 4 things I’ve learned from being a Project Manager (and none of them include Gantt charts):

Whenever making a request, no matter how small, assign an owner

I often see e-mails sent to an entire department containing a request. If a specific owner is not assigned, it’s easy for the request to get lost. Don’t assume that someone will answer your question just because it’s in an e-mail, especially if that e-mail is sent to more than one person! When a question is sent to multiple people, it’s easy for diffusion of responsibility to occur. Everyone assumes that ‘someone will take care of it,’ and your request will stagnate as a result.

Don’t say: “Could someone please schedule a meeting with the customer?”
Do say: “Jane, could you please schedule a meeting with the customer?”

Assign deadlines to tasks

When making a request, add a deadline to it. If you aren’t sure what deadline to give, it’s totally fine to assign one (based on the information you have at the time). If you do have a tangible deadline (set by a customer, for example), make your internal deadline a few days earlier, so that you have time to troubleshoot any unforeseen issues that may arise.

Setting a deadline (even one that may change) helps keep things moving, and gives you a clear time at which you can follow up.

When setting a deadline, avoid using general terms like “asap.” What you think of as ‘soon’ may be totally different from what someone else considers ‘soon.’

Don’t say: “Please get back to me asap on this request!”
Do say: “Could you send me a response by Wednesday morning?”

Provide context when making requests

When making a request, provide context. This will help the recipient of the request understand both the request itself as well as your requested deadline, which will make them more likely to fulfill it. When context is not included, a request can seem arbitrary or unreasonable, even if it isn’t.

Compare for example: “Please send me a status update on this feature asap – by Friday morning,” to “Could you please send me a status update on this feature by Friday morning? I have a meeting with the customer on Monday morning and want to make sure I have time to ask you any follow-up questions on Friday. That way I can present the status clearly to the customer and answer any questions they have.” Which request would you be more likely to fulfill?

Additionally, sometimes we think we know what we need when we make a request, but it turns out that we actually need something different. By including context, the recipient of the request can help redirect you in case some other solution would serve you better.

For example, a customer may ask for a new feature, such as the ability to create and execute custom scripts within your product. However, if they had provided context on what they truly wanted (perhaps the ability to query for specific artifacts), they could have learned that there is already an easy way for them to run queries right within the product’s UI, without any coding required. By including context, they are better able to find the ideal solution to their true goal.

Send a recap e-mail after a meeting is held, containing actionable tasks along with owners and deadlines

We’ve all experienced holding a meeting, feeling really good about what was discussed, and then three weeks later realizing nothing has come of it. To prevent this from happening, send out a recap e-mail after the meeting containing any actionable tasks that arose from the meeting. As we’ve already discussed, assigning owners and deadlines to the tasks will hold the team more accountable and make it easier to follow up.

Are there any strategies that have helped you become more effective in your role? Please comment below.

Software-Defined Technologies: Transforming the Value Stream

Thu, 03/23/2017 - 11:00

Software-defined is a concept that refers to the ability to control some or all of the functions of a system using software. The concept is sometimes incorrectly characterized as a buzzword or marketing jargon, when in fact it has a clear meaning that needs to be understood by organizations looking to keep pace with change.

When technologies become software-defined, there are major systemic benefits for organizations that use them, including lower costs, higher quality products and services, and less risk.

At the same time, software-defined technologies require major organizational changes for incumbent enterprises to adopt and use effectively. This often involves expensive and risky transformation projects that reengineer the value stream to take advantage of decoupled components, reduced dependencies and new management capabilities.

Today we will look at the origins of the “software-defined” concept and how its application presents both opportunities and challenges to the enterprise.

The beginning: ‘Software-defined Radio’

The software-defined concept comes to us from the evolution of radio transmission technology. A traditional radio communications system uses physically connected components that can only be modified through physical intervention. The antenna connects to the amplifier, which connects to the modulator, and so on. Operators are locked into the specifications of the components, the order in which they are connected, and whatever controls they expose. It’s an extremely inflexible technology and changes are best done by simply buying a new system.

As you can imagine, for businesses that operate large-scale radio deployments such as wireless telecom providers, technology decisions are hugely impactful. They can last decades and demand large upfront planning and capital costs. Keeping pace with change is extremely expensive and difficult.

Base Transceiver Station

In the mid-eighties, however, researchers began to take specific components of the radio and make them digital, implementing functions like oscillators, mixers, amplifiers and filters by means of software on a computer. By emulating these functions in software, the system becomes adaptive and programmable, and can be configured according to the needs and requirements of the operator, rather than the specifications of the manufacturer.

In 1995, the term Software-Defined Radio (SDR) was coined to describe the commercialization of the first digital radio communication system, and this development changed the way these services and products can be delivered.

On the technical side, in becoming software-defined, many functional limitations are removed from radio systems. For example, by simply reprogramming the software, a device can have its frequency spectrum changed, allowing it to communicate with different devices and perform different functions. This has enabled a quick succession of technical advances that were previously the domain of theory and imagination, like ultrawideband transmission, adaptive signaling, cognitive radio and the end of the “near-far” problem.

On the business side, the changes are equally profound, having a significant impact on the value stream of enterprises throughout the wireless and radio industry, and on the industry itself. A wireless telecom provider employing software-defined radio can easily add new features to its network, adapt its systems to take advantage of new spectrum bands, or reconfigure itself when a new handset technology like LTE 4G becomes available. A telecom provider able to reconfigure its infrastructure by deploying updates to software rather than by buying new hardware can take advantage of huge operational savings while eliminating capital expenses.

SDR therefore provides significant strategic advantage to these businesses, introducing adaptability, modularity and agility to the organization where it was previously rigid and inflexible.

Taking advantage of SDR, however, is a long, transformational process, requiring a lot of capital and a significant departure from the status quo. Not only does it require changing all infrastructure over to the new technology, but it also requires the business to think differently and reengineer the value chain to take advantage of the new capabilities.

Software-defined Infrastructure

The IT industry has also been deeply impacted by the advent of software-defined technologies. The following examples have created industries and enabled a generation of evolved products and services:

  • Hypervisors – A hypervisor is an operating system that runs virtual machines, like VMware ESXi or Microsoft Hyper-V. It runs directly on the physical machine, abstracting and distributing the hardware resources to any number of virtual machines. This has undoubtedly been one of the largest and most impactful advances in IT in the last 20 years, ushering in the era of point-and-click server deployment and changing the way we manage and deliver IT services.
  • Software-defined Networking (SDN) – Traditionally, operating a network means managing lower level infrastructure that allows devices to connect, communicate with each other, and figure out where to send their packets. These switching devices – called “layer 2 devices” – each need to maintain their own state and configuration information, and make decisions about how to route packets based only on limited, locally available information. SDN abstracts layer 2 networking, and is the ‘secret sauce’ behind cloud computing – a critical functionality for all public cloud services including AWS, Azure and OpenStack-based providers. It allows the service provider to centralize routing and switching, and provides the orchestration capability required for large-scale multi-tenancy i.e. the ability to create and manage millions of logically isolated, secure networks.
  • Network-function virtualization (NFV) – Building upon SDN, NFV allows services like load balancers, firewalls, IDS, accelerators, and CDNs to be deployed and configured quickly and easily. Without NFV, operating infrastructure at scale would require a lot of capital investment and an experienced team of highly specialized network engineers. NFV makes it easy to deploy, secure and manage these functions without having to understand the complexities under the hood.

“Software-defined” Defined

Having looked at where the concept came from and a few examples of modern software-defined technologies, I propose the following definition for what it means to be “software-defined”:

Software-defined means some or all of the functions of a system can be managed and controlled through software.

Some key attributes of a software-defined technology (a sketch illustrating all three follows the list):

  1. The functions are abstracted
    • Software-definition strives to have stateless functions i.e. functions that do not maintain their configuration or state themselves. State and configuration information is maintained outside the function, i.e. in the software. By decoupling the state and configuration from the function and centralizing it, we gain adaptability, resilience, and the benefit of visibility at scale.
  2. Software controls functionality
    • No direct operator or human intervention is required for the function to operate – functions are managed solely through software. Management and administration are therefore decoupled from the function. We gain the ability to automate processes and activities, and manage the system independently from functional limitations.
  3. Functional components are modular
    • The software layer operates independently of any particular functional component. This means the functional components can be commoditized, modular and scalable. We can easily change or replace these components without disrupting the system.
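A hypothetical desired-state document makes these attributes tangible. The schema below is invented for illustration – it is not any real controller’s format – but it shows state held outside the functions (attribute 1), software driving the functions from that state (attribute 2), and functional components that can be swapped without touching the definition (attribute 3).

    # Invented desired-state document for a software-defined system.
    # The functions hold no state of their own; the software layer
    # owns it and pushes it down to whatever components are in place.
    networks:
      - name: tenant-a
        cidr: 10.0.1.0/24
        isolated: true          # enforced by the controller on any conforming switch
      - name: tenant-b
        cidr: 10.0.2.0/24
        isolated: true
    controller:
      reconcile_interval: 30s   # software continuously drives the functions toward this state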

Adoption through Transformation

On the face of it, software-defined technologies are better-faster-stronger, and companies that use them will have a competitive advantage over those that do not. They lead to lower costs, higher quality and less risk for the business. Organizations building products and services that leverage these technologies can use them to disrupt incumbent enterprises.

For those enterprises, however, especially those locked in the middle of a legacy lifecycle, software-defined technologies present a significant challenge. Adoption requires rethinking the value stream and integrating with legacy systems. As stated in the book Lean Thinking,

“We are all born into a mental world of ‘functions’ and ‘departments,’ a commonsense conviction that activities ought to be grouped by type so they can be performed more efficiently and managed more easily” (p. 25)

While advances in technology have generally provided solutions to problems, for large enterprises software-defined technologies are the problem. Not only do they present a threat to the business in the hands of startups, they also explicitly change the way functions and activities are organized, operated and managed. They demand rethinking how, where, when and by whom functions should operate in the value stream. They give startups the ability to disrupt entire industries, while only creating waste for organizations that try to adopt them without first undergoing a transformation.

Not only do enterprises therefore face the challenge of developing competencies with new software-defined systems, they also face the challenge of changing their culture, reorganizing their team structures and reengineering their value stream. People in these organizations will need to be highly flexible and open to a paradigm shift in how they think about their work, their roles, and their activities in the value stream. Adoption needs to be driven through large-scale (and risky!) change and transformation projects.

In my next article, we will look at the next generation of business transformation projects – digital transformations – and see how the approach to deploying software-defined technologies in the value stream impacts success.

Putting Data Into Context: The Power of the Right Information at the Right Time

Tue, 03/21/2017 - 09:25
Part of a series of reviews on industry events, this blog talks about Tuuli’s reflections on Swiss Testing Day and the IBM CE event in March 2017.

Have you ever received an email and thought: “Why am I receiving this email? What’s the context?” It’s likely that someone had a discussion outside the email thread and, with the best intentions, sent an email to everyone they thought was affected by the topic of conversation, but forgot to reference the relevant material in the email.

Receiving information without context is not only frustrating, but can sometimes have undesired implications. Let’s imagine someone held a war-room discussion about a major incident affecting a CRM system, and fixing it required applying a fix from the vendor. Without knowing the context, and thus the possible fix, the poor owner of the CRM system receives complaints not only from their team but also from customers – complaints that often get escalated to management very quickly.

Last week I had the pleasure of attending Swiss Testing Day in Zurich and presenting at an IBM Continuous Engineering Symposium in London.

Though the subjects were fairly different, two clear themes emerged: the importance of data relationships and traceability. You can see how both of these derive from the same challenge of data context. Below I’ve broken down the two themes to explore them further:

Data Relationships

  • Helps understand connections.
  • Helps decide what action (or non-action) to take: Who needs to know? How important is this information? Who is or will be impacted? Who can I ask for more detail about this? What dependencies are there?
  • Julius Bär’s presentation mentioned this a number of times.

Traceability

  • Presenting information in light of historical data.
  • It helps answer difficult questions such as: What happened before this information was sought? What triggered it? How can we prevent it from happening again?
  • What is the predicted risk? What may happen in the future based on the history?
  • It not only helps with audits, but also helps understand the story behind the data.

And all of this context boils down to one thing: it allows you to make informed decisions. But remember, all the information presented is open to interpretation, and the action or lack of action is up to the human reading and interpreting the data.

In our first example of the broken CRM system, the system owner might have prevented the issue had they been informed about the known error and vendor fix beforehand. They could then have taken action without having to deal with the issue and customer satisfaction at the same time.

Tuuli with Tasktop partners Informetis AG and Tricentis at Swiss Testing Day 2017.

Value Stream Automation = Superior Software Delivery

Mon, 03/20/2017 - 11:16

Automation isn’t a new trend in IT, but its increasing influence continues to dramatically transform business-critical processes including software delivery. In a time where speed and productivity can make the difference between the failure and success of a project – and provide that all-important competitive edge – automation is crucial in vanquishing repetitive tasks that are slow, onerous and susceptible to human error.

The quality of software and the speed with which it can be delivered relies on the real-time flow of information between the people, teams and tools across the lifecycle. That’s why the most successful organizations are prioritizing value stream integration to automate the flow of information across their software lifecycle.

Without value stream automation, stakeholders are forced to use manual means of sharing this information such as exporting data from one tool and importing into another, manually re-keying this information or sharing it during wasteful status meetings. These tasks are time-consuming and a perpetual drain on productivity, as well as undermining the accuracy and quality of the end-product and/or service.

Fortunately, value stream automation eradicates these issues, ensuring the automated flow of information such as artifacts (i.e. defects, user stories, trouble tickets), as well as information from events (such as build features, version control changesets, security scan vulnerabilities and performance monitoring alerts).

By automating the value stream, you create a frictionless flow of information across a unified software development and delivery lifecycle that is seamless and waste-free, helping organizations to increase their teams’ effectiveness and empowering stakeholders by:

  • Automating handoffs across people, teams and tools
  • Providing real-time updates of status, comments, attachments etc.
  • Enhancing collaboration through in-context communication
  • Removing non-value added work, bottlenecks and wasted time
  • Increasing velocity and reducing errors and rework
  • Enabling automated traceability and visibility
  • Enjoying productivity-related savings of up to $10 million (based on Tasktop calculations for a 1500-person software development and delivery team)

By integrating your software delivery value stream with Tasktop, you can transform how you plan, build and deliver software. We’ve done this for some of the most successful companies in financial services, healthcare, retail and automotive – including nearly half of the Fortune 100.

Speak to us today to see how you can integrate your value stream, automate the flow of information and greatly improve your software delivery capabilities.

How Value Stream Visibility Enhances Software Delivery

Wed, 03/15/2017 - 11:26

A software development and delivery lifecycle typically comprises many people, teams, tools and disciplines. Often the data within these multiple components is siloed and visibility across the value stream is poor (or even non-existent). Ultimately this means the quality and speed of the software delivery suffers, IT projects fail and Agile and DevOps transformations struggle to scale.

None of the popular best-in-breed software tools provide automated traceability across the value stream from ideation to production, meaning critical activity data is only reported in those individual tools. As a result, IT leaders have a fractured view into the health of their software delivery, inhibiting them from detecting patterns, spotting inefficiencies or tackling bottlenecks.

If tools within the value stream are not integrated, then there is no end-to-end visibility into the evolution of an artifact as it moves through the lifecycle. The lack of a holistic overview means it’s very hard to ascertain how the artifact has evolved across the disciplines and tools being used in the project, so the context of the artifact and the semantic understanding is lost.

However with Tasktop, whenever any of the artifacts change in any of the connected tools, this activity data is streamed to a centralized database. From there, the data can be manipulated and visualized using standard business intelligence tools by stakeholders involved in the lifecycle. This provides the basis of comprehensive metrics and governance programs, including:

  • Visibility into the full lifecycle of software development and delivery from ideation to production
  • Data for real-time insight into the state of application delivery and value creation
  • Consolidated metrics and KPIs for management, optimization and transformation
  • Automated traceability across the entire lifecycle

The data helps organizations to:

  • Identify bottlenecks in the value stream
  • Automate the creation of traceability reports for governance programs
  • Obtain a consolidated view of the status of application delivery
  • Merge application delivery metrics with financial reporting data to determine the true cost and benefits of IT initiatives

Tasktop provides this data and enables end-to-end visibility, traceability and more. For more information, see our brand new website and contact us today to discuss how we can give you complete visibility into your software lifecycle for optimized decision-making that will help you drive your Agile and DevOps transformations.

 

A Day Without Women

Mon, 03/13/2017 - 10:54

This past Wednesday was International Women’s Day. In conjunction, many women participated in a “Day Without Women” protest.

I know in the age of social media, we’ve already moved on to the next big thing. The articles are written, all the tweets are twitten. But I wanted to take a minute to give a guy’s opinion. Oh, and by the way, this is meant for the guys out there. Women, feel free to skip this post. You know this already.

It honestly feels a bit odd to write about this. As a guy, it’s easy for me to fall into one of three camps: 1) the well-intentioned but misguided mansplainer, 2) the troll asking why we don’t have a ‘Day Without Men’, or 3) the silent ally.

The first group feels like they’re doing good by jumping out in front of the movement and proclaiming what women should do. It’s hard to fault these guys, but it’s patronizing and implies that women don’t have the ability or autonomy to act and think independently. And while it may feel good, I’m not sure if it actually helps.

We can skip right over the second category.

I happen to think there’s a whole lot more to the third category than we’d like to admit. This group supports the Day Without Women cause. They’re 100% behind their colleagues striking and nod in agreement during happy hour when the issues of women’s rights and gender equality come up. These are ‘the good guys’, but these are the guys that don’t do anything. They’re not blocking the movement, but they’re not advancing it either.

Here’s the catch…I don’t want to belong to any of those groups.

I want to be more than that. The silent allies have no skin in the game. They have no voice.

Here’s my little chance to speak out. To take just a little risk by writing about what I saw. It’s not much, but it’s better than sitting on my butt doing nothing.

A Day Without Women

I woke up Wednesday not realizing that there was a women’s strike about to happen. Only after checking my social media feed did I remember.

To give some background, my department consists of three men and three women (one of which is my boss). A few weeks ago, my boss told us she was participating in the strike and all of the women were encouraged to participate as well.

So Wednesday came, and while my boss did in fact take the day to protest, neither of the other two women did. They both had work responsibilities that needed to be attended to right away. One took part of the day, but the other worked a full day. Another women who used to be on our team was also working that day.

I came home and talked to my wife. She’s a Product Manager at another software company. It was a completely normal day at her office. All of the women were still working. Coincidentally, she had one meeting that consisted of all women. This is just one example of how vital women are at her company.

Do I think the women on my team didn’t take the day off because they’re overworked or put upon? No. I think they went to work because they know their contributions are important and they were needed at their jobs that day. But it’s what another Tasktop woman said to me that provided a fresh perspective.

She told me that she worked, not because she didn’t agree with the strike, but because she feels supported here. She feels that our company has been good to her, supports women’s equality and needed her that day.

Some of my women colleagues who worked that day supported the strike in other ways: writing blogs, refraining from using their purchasing power that day, and/or contributing to organizations that fight for women’s rights.

It was interesting for me to hear. Multiple women at Tasktop with strong convictions about women’s equality, taking different paths that all lead to the same gender equality goal.

Can’t be done

Because I’m a 40-year-old male, this type of social issue is not typically at the forefront of my thoughts. I thought we were pretty much past this. Obviously, I was wrong. Before International Women’s Day, my company sent out a request for employees to answer the question “Why do you feel having women at Tasktop and/or in STEM is important/positive?” You can see some of the replies in the subsequent blog post, The Importance of Women in STEM. When I opened that email, I’ll be honest, I thought it was a bit silly. Why? Because I couldn’t imagine that anyone wouldn’t know that women are a valuable part of the workforce. Silly because I couldn’t believe that there are people out there who think the US economy could survive if the workforce reverted back to what it looked like in the 1950s.

The simple answer is that Tasktop would not be as successful without women. And it’s not because they’re women. It’s because they’re smart, talented, and driven people. It’s because they’re the right people for the job. Full stop.

It seems to me that limiting yourself and your company to half the workforce, half the world, is simply a very bad idea. Women at Tasktop have designed our product, they’ve built our product, they’ve marketed our product and they’ve sold our product.

So while I didn’t march on March 8th, or take the day off, I’ll be doing my best from now on to be more than a silent ally.

The Importance of Women in STEM

Wed, 03/08/2017 - 08:36

On March 8th, Tasktop joins organizations and individuals around the world in support of International Women’s Day. International Women’s Day is celebrated globally to bring together women, men, and non-binary people to lead within our own spheres of influence by raising awareness and taking action to accelerate gender parity.

Said best by Tasktop VP of Product Management, Nicole Bryan, in her blog, Role Model Ladders: A Concrete Path to Getting More Women in Technology:

“Through small but intentional steps, we can change things.”

At Tasktop, 35% of employees and 40% of the management team are women. Not only are we changing the world of software development and delivery, we’re doing it within an environment of diversity and inclusivity. Our teams recognize the importance of diversity within the workplace and outside of it.

Tasktop President, Neelan Choksi, serves as a trustee at TechGirlz, a non-profit working to get adolescent girls excited about technology. In addition, the Tasktop engineering team is spearheading a Technovation Challenge designed to help give girls around the world the opportunity to learn the skills they need to emerge as technology entrepreneurs and leaders.

As current leaders in technology experiencing the benefits of diversity firsthand, many Tasktop employees shared their thoughts on why having women at Tasktop and in STEM is important:

“From co-founder to every team and level of management, Tasktop has thrived by actively seeking and fostering an environment for women in tech.  The diversity in thinking and problem solving produces better innovations and better business results.  As high tech businesses continue to become more complex and more creative, the companies that are enlightened to this will outperform those who are stuck in the stone age of boys’ clubs.”
Dr. Mik Kersten, Tasktop Co-Founder and CEO

“Having diverse teams at Tasktop enables a collaborative environment where voices with different experiences combine to create software that considers a problem from multiple perspectives. And it is a lot more fun to work in a diverse workplace!”
Gail Murphy, Tasktop Co-Founder and Chief Scientist

“The more ideas you have to choose from, the better the result will be. Each person with a different background, a different outlook, a different approach to life brings different ideas to the table. Diversity drives innovation by starting with more points of view represented.”
Dawn Baikie, Tasktop Senior Software Engineer

“You wouldn’t voluntarily walk through life with one arm tied behind your back, would you? Regardless of who you are, we all have our own experiences inside and outside of work, and it’s these individual experiences that provide diversity of thought, sow the seeds of creativity and drive innovative ideas. Tasktop, STEM and wider society are all greatly enriched by the input, output and presence of many talented women, many of whom I’m proud to call my colleagues, friends and family.”
Patrick Anderson, Tasktop Content Specialist

“For the first time in my career, I’m on a team with a majority of women. Even though gender does not matter – I happily work with both women and men – it’s great to work for a company who promotes equality (be that gender, race, background, disability, or other). Where [gender] equality is known to correlate with income, education, political empowerment, and health, I can’t help but think that diversity and equality within an organization results in stronger financial results for companies.”
Tuuli Bell, Tasktop Partner Account Manager, EMEA

“Tasktop has women working in technical roles across the company. Seeing the work they do has broadened my conception of what it means to be a woman working in STEM. I realize that no longer is it just about coding (though that is important too). My experience has shown that all levels and teams from a company benefit from women who exercise their analytical and critical thinking skills and combine them with their other unique abilities.”
Cynthia Mancha, Tasktop Product Manager

“It is important to me to have women at Tasktop and in STEM because I want my kids to think of gender parity as we think of women’s suffrage, something that’s not to strive for, but something you are confused about how it could have ever been in question.”
Thomas Ehrnhoefer, Tasktop Senior Software Engineer

“Diversity is a major driver of creativity and innovation. Years ago women were criticized for not behaving more like men in business. Now we know that embracing the differences in the way we think and make decisions drives innovation and business success.”
Joyce Bartlett, Tasktop Marketing Director

“Women represent nearly half of the workforce in the US today, but only a quarter of the jobs in STEM. The more we encourage women’s interest and passion in the fields of science, engineering, math and technology, the more we will see our perspectives and needs represented in society. To be an agent of change is to help others visualize what’s possible.”
Emily Kelsey, Tasktop Regional Sales Manager

“Because it’s silly we still have to have this conversation. Women are half the workforce. Not only is it the good & right thing to do, it’s a competitive advantage. Do it to be a better company.”
Trevor Bruner, Tasktop Product Manager

“There is rarely a week that goes by where I don’t contemplate how fortunate I am to be a Tasktopian. It’s not just that the company is doing interesting and important things, but it’s how we do it. Earlier this week, our CEO (Mik Kersten) made a special effort to remind his team that Tasktop’s culture is a place where gender equality and justice run deep. He did so in support of those within the company who wanted to support the “A Day Without a Woman” initiative. At Tasktop, I am fortunate to work in an inclusive environment where diversity is a hallmark; where it’s recognized that the diversity in our thoughts, coupled with unity in action, leads to a stronger, more innovative company.”
Betty Zakheim, Tasktop VP of Industry Strategy

“It is important to have women as part of an IT organization to provide a range of critical thinking skills to their male counterparts. Including women creates a 360-degree view of challenges and possible solutions for a world that increasingly requires creative problem solving.”
Beth Beese, Tasktop Business Development Manager

“Having different perspectives and ideas on the table is essential to the process of good software design. We rely heavily on diversity within the team for those ideas and perspectives. Women bring a unique perspective that, when combined with other types of diversity, ultimately leads to better design decisions and better software.”
David Green, Tasktop VP of Architecture

Women pursuing careers in STEM don’t always have it easy, but at Tasktop we’re striving to pave the way for the future women of STEM. One of the many reasons I’m proud to be a Tasktop employee.

Hey Guys… While the Women Strike, Consider This…

Wed, 03/08/2017 - 06:45

I believe that International Women’s Day has special significance this year. With the heightened political atmosphere charged with undertones of misogyny, the Women’s March shining a light on women’s rights and voice, and high-profile news stories about sexual harassment, it just seems like International Women’s Day should be particularly meaningful, and hopefully memorable, in 2017.

So on this day where many women are striking to help bring attention to the value of women to our economy and our culture, I have a challenge for the men: be part of the conversation about women’s rights and social justice.  The fight for women’s rights and equality can only be won if it is not just women having the conversation.  We need men to play an active role in change.  And that means bringing men into the fold to talk about and consider why it is so important to have women in the workplace – treated and paid equally.   

So today, as you sit at work looking around at empty desks (or, worse yet, at desks that aren’t empty because your company is mostly male), it seems apropos to take a brief moment to consider why you value women in the workplace and see them as equals, and then share that story with your wife, partner or female friends. Or grab a couple of your colleagues, gather around the water cooler and tell a few good stories about why you think having women as equal participants in our economy and workplaces is better for you, better for your company and better for the world.

To that end, at Tasktop, our CEO, Sr. Director of Business Development, Sr. Director of Engineering and Sr. Director of Technical Solutions have all contributed their thoughts as to why Tasktop is a better place to work and produces better work output because we value diversity and constantly strive to attract and retain women in the workplace.

Mik Kersten, Tasktop CEO: “From co-founder to every team and level of management, Tasktop has thrived by actively seeking and fostering an environment for women in tech.  The diversity in thinking and problem solving that results produces better innovations and better business results.  As high tech businesses continue to become more complex and more creative, the companies that are enlightened to this will outperform those who are stuck in the stone age of the boys’ clubs.”

Wesley Coelho, Tasktop Sr. Director of Business Development: “Tasktop is absolutely a more successful business because women are involved. 75% of Tasktop’s Business Development team members are women. Because of their contributions we have been able to drive substantially higher revenue and become the most widely adopted integration technology for SDLC application vendors.”

Lucas Panjer, Tasktop Sr. Director of Engineering: “Tasktop is a more successful business because women are involved. (It’s really that simple). It’s the diversity, the perspective, and experiences that are brought to bear in design, decision making, strategy and execution. These things are concrete, important, and contribute to a better overall culture, product, and business. However, I would argue, these are less impactful reasons and that the impact of women is far bigger and simpler to explain. We’re not making the most of our world, society, and personal and professional opportunities if women, and any under-represented groups, aren’t fully present, participating, and at the fore. If women aren’t fully here, we’ve lost out on potential, and created a huge opportunity cost for ourselves, this company, and society as a whole.”

Shawn Minto, Tasktop Sr. Director of Technology Services: “Tasktop successfully supports our customers because of the dedication of the women involved. Whether it be working tirelessly with a customer to troubleshoot and solve a problem whilst building a strong relationship, understanding a new tool and its use in its entirety or working through complex legals to complete a sale, the women of Tasktop use their intelligence and skills to excel at their tasks. Through their perseverance, passion and commitment to everything that they do, they are integral to our success.”

These thoughts, coming from our male colleagues, make me extremely proud to be part of the Tasktop team, and hopefully they will inspire other men to come forward and be champions for women. Being a champion for women is really being a champion for all.

Defect Management – Process Instead of Tracking

Tue, 03/07/2017 - 10:19

As software development continues to evolve, we need to reconsider how we manage defects. In the past, defect management focused merely on documenting and fixing the issues discovered. Today that is simply not enough: in modern Agile organizations with highly integrated toolchains, that narrow approach is ineffective.

Now we need to establish a process that tracks defects across the entire tool stack and uses all available information to improve the software development lifecycle. To achieve this, the process should have the following main goals:

  • Prevent defects (the main goal)
  • Drive the process by risk
  • Integrate measurement into the development process, so the whole team can use it to improve the process
  • Automate the capture and analysis of information as much as possible
  • Recognize that many defects are caused by an imperfect process, and make the process adaptable based on conclusions drawn from the collected information

To reach these goals, one can take the following steps:

Defect Prevention – Standard processes and methodology help to reduce the risks of defects

Deliverable Baseline – Milestones should be defined at which deliverable parts are completed and ready for future work. Errors in a deliverable are not considered defects until the deliverable is baselined

Defect Discovery – Every identified defect must be reported. A defect is only discovered when the development team has accepted the reported issue and it has been documented

Defect Resolution – The development team prioritizes, schedules and fixes the defect based on the risk and business impact of the issue. This step also includes the documentation and verification of the resolution

Process Improvement – Based on the collected information, the process stage in which each defect originated should be identified and analyzed, in order to improve that stage and prevent similar defects in the future.

Management Reporting – At every step, the collected information should be available for reporting that assists with project management, process improvement and risk management.
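To make the lifecycle implied by these steps concrete, here is a minimal sketch in Python. All names are illustrative rather than taken from any particular tool; the point is that a defect only exists once the team accepts it, that resolution order is risk-driven, and that every transition leaves data behind for process improvement and reporting.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum, auto


class DefectState(Enum):
    REPORTED = auto()    # an error raised against a baselined deliverable
    DISCOVERED = auto()  # accepted and documented by the development team
    RESOLVED = auto()    # fixed, with the resolution documented and verified


@dataclass
class Defect:
    summary: str
    deliverable: str
    risk: int             # higher = riskier; drives prioritization
    business_impact: int  # higher = more impactful
    state: DefectState = DefectState.REPORTED
    history: list = field(default_factory=list)

    def _log(self, event: str) -> None:
        # Every transition is recorded so the Process Improvement and
        # Management Reporting steps have data to analyze later.
        self.history.append((datetime.utcnow().isoformat(), event))

    def accept(self) -> None:
        # Defect Discovery: a defect only exists once the team accepts it.
        self.state = DefectState.DISCOVERED
        self._log("accepted by development team")

    def priority(self) -> int:
        # Defect Resolution is driven by risk and business impact.
        return self.risk * self.business_impact

    def resolve(self, resolution: str) -> None:
        self.state = DefectState.RESOLVED
        self._log("resolved: " + resolution)
```

Sorting a backlog by priority() is one simple way to realize the risk-driven ordering described above.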

When creating a defect management process that is right for your organization, it is also important to consider the stakeholders and the type of assets and artifacts involved.

Stakeholders

The defect management process involves many different stakeholders and they must be taken into consideration when developing an effective defect management system. Let’s consider the flow of information.

The author creates or reports the defect to the development team. Based on where the defect was identified, the authors could be developers, testers or members of the support team.

These people can also be consumers of the defect. Developers must verify, fix and document the resolution of each identified defect. Testers use the information to create new test definitions based on the defects found, and to verify whether a resolution actually solves the problem. The support team can use the information to offer workarounds and to clarify reported issues that are already tracked as defects.

In smaller teams the developer could also be a contributor. In larger teams, the development manager holds this role, prioritizing, scheduling and assigning the defects that have been created. The executives or management are another consumer, using the information in reports to gain insight and improve the development processes.

Artifacts and Assets

The main assets of defect management are error reports: descriptions of the problem that should include detailed information for reproducing the issue. Screenshots or screen-capture videos can help with reproduction, and log files, especially those with detailed tracing and stack traces, are an important source for the development team when identifying the defect. In most agile or application lifecycle management systems, a defect can be tracked and documented as an artifact (such as an issue, defect, problem or bug).
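As a rough illustration, an error report carrying these assets might be modeled like this (a sketch with invented field names, not the schema of any particular tool):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ErrorReport:
    """The central asset of defect management: everything needed to reproduce."""
    description: str
    steps_to_reproduce: List[str]
    screenshots: List[str] = field(default_factory=list)      # file paths or URLs
    screen_captures: List[str] = field(default_factory=list)  # video recordings
    log_excerpts: List[str] = field(default_factory=list)     # tracing output, stack traces

    def is_actionable(self) -> bool:
        # A report the team can act on needs at least a description and
        # concrete reproduction steps; logs and screenshots strengthen it.
        return bool(self.description and self.steps_to_reproduce)
```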

How Tasktop can improve your defect management

In today’s integrated software development lifecycle, stakeholders use different types of tools to fit their needs, and defects need to be created and tracked in each of these systems. To prevent a lag in communication and loss of information, the different tools can be integrated with Tasktop so that information flows automatically across the whole tool stack.

Some common integration patterns are:

Developer-Tester Alignment – Defects can be synchronized into testing tools so that tests can be created based on them. Additionally, testers can create defects in their favorite tool and have them synchronized back into the developers’ tool for quick and easy resolution

Help Desk Integration – Support can create a defect from a reported problem and have it synchronized, allowing support to track the defect’s status. Furthermore, the information from existing defects can be used to build a knowledge base of known issues and workarounds

Security Issue Tracking – Security violations discovered by an application security tool are synchronized as defects for resolution

Supply Chain Integration – Defects can be synchronized during the quality assurance process to a contractor or third-party supplier for quick resolution

Consolidated Reporting – All defect information can be aggregated and consolidated to create reports for optimization of the defect management process
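For a feel of what such an automated flow does under the hood, here is a sketch of one direction of the developer-tester alignment pattern over two hypothetical REST APIs. The endpoints and field names are invented for illustration; Tasktop configures flows like this declaratively rather than through custom code.

```python
import requests

# Hypothetical endpoints standing in for any two trackers in the toolchain.
TESTING_TOOL_API = "https://testing.example.com/api/defects"
DEV_TOOL_API = "https://dev.example.com/api/issues"


def mirror_new_defects(session: requests.Session) -> None:
    """One direction of developer-tester alignment: defects filed in the
    testing tool are mirrored into the development tool."""
    defects = session.get(TESTING_TOOL_API, params={"mirrored": "false"}).json()
    for defect in defects:
        issue = {
            "title": defect["summary"],
            "body": defect["description"],
            "labels": ["defect", "origin:" + str(defect["id"])],  # traceability
        }
        created = session.post(DEV_TOOL_API, json=issue)
        created.raise_for_status()
        # Record the link so status changes can later flow back the other way.
        session.patch(
            TESTING_TOOL_API + "/" + str(defect["id"]),
            json={"mirrored": True, "mirror_id": created.json()["id"]},
        )
```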

For further information on defect management and how Tasktop can help by integrating your software value stream, please visit our website and contact us today.

Test Management – An Integration Opportunity

Thu, 02/23/2017 - 13:38

Test management is the practice of managing, running and recording the results of a potentially complicated suite of automated and manual tests. Test management also provides visibility, traceability and control of the testing process to deliver high quality software, quickly.

Tools for test management are generally used to plan and manage tests, test runs and gather execution data from automated tests.  Additionally, they can typically manage multiple environments and provide a simple way to enter information about defects discovered during the process.

When we explore how organizations manage their software testing, it becomes clear that an integrated software toolchain greatly improves test management. The benefit becomes particularly clear when we consider how a connected workflow supports the stakeholders, the flow of assets and artifacts in the process, and the common integration scenarios you encounter.

Stakeholders
There are several stakeholders involved in Test Management process:

  • Testers: consume requirements to create and execute test cases.
  • QA Managers: contribute to prioritization and high level planning of tests.
  • Developers: contribute to building the software and fixing defects found by testers.
  • Product Managers: define the requirements to be tested and determine release readiness.

Assets and Artifacts
Common assets used by Test Management tools are test plans, automated test scripts (code) and automated test frameworks (setup, teardown and result files). The most common artifacts used and produced by Test Management tools are test executions, test cases, test configurations, test sets, test instances, requirements and defects.

Integration Scenarios
Some common integration patterns used in the test management process are Developer-Tester Alignment, Requirements Management-Test Planning Alignment, and Test Alignment.

  • Developer-Tester Alignment: defects generated by developers are synchronized into a Test Management tool so tests can be written against them to prevent regressions, and defects generated by testers are synchronized into development tools so that they can be resolved.
  • Requirements Management-Test Planning Alignment: requirements generated by a Business Analyst in an Agile tool are synchronized into a Test Management tool so that tests can be written against them in parallel with any development efforts.
  • Test Alignment: tests are generated by Agile team members to validate user stories during the Sprint. Tests are synchronized to a Test Management tool so a centralized testing organization can add additional detail and automate the tests as needed.

Integration Example
One popular test automation suite is Selenium, which allows an organization to develop an extensive test suite for web-based products. One particularly interesting integration opportunity is to capture the failures of test cases run by Selenium in the test management tool (e.g. HPE ALM) and kick off the appropriate development work, so that both the development and QA teams are informed of issues that need attention.

  • A QA team uses Selenium for automated testing of their web application and HPE ALM for test management, while the development team uses HPE ALM to resolve any defects found by Selenium.
  • When Selenium detects a failure in its testing, failures should be recorded and submitted as defects. Test case results should also be linked to their original test cases in the test management tool (HPE ALM in this case).

Tasktop Integration Hub can extend this automation across the lifecycle: it automatically creates a new defect in HPE ALM for prioritization and resolution, extending the benefits of automated testing into the enterprise tool of choice for test management.
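As a sketch of the first half of that flow, the snippet below runs a Selenium check and, on failure, submits a defect to a hypothetical test-management REST endpoint. The endpoint and payload fields are invented for illustration (HPE ALM’s real API differs), and Tasktop Integration Hub provides this linkage without custom glue code:

```python
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

ALM_DEFECTS_API = "https://alm.example.com/api/defects"  # hypothetical endpoint


def check_login_page() -> None:
    driver = webdriver.Chrome()
    try:
        driver.get("https://app.example.com/login")
        # This assertion stands in for any real Selenium test in the suite.
        assert driver.find_element(By.ID, "username").is_displayed()
    except Exception as exc:
        # On failure, record the result as a defect rather than letting it
        # languish in a test report, and link it back to its test case.
        requests.post(ALM_DEFECTS_API, json={
            "summary": "Login page check failed",
            "details": str(exc),
            "test_case": "check_login_page",
            "status": "New",
        })
        raise
    finally:
        driver.quit()
```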

To learn more about how Tasktop integrates your software value stream and improves your test management capability, visit our website and contact us today.

Strengthening Application Security in the Software Development Lifecycle

Tue, 02/21/2017 - 13:32

As software continues to pervade our lives, the security of that software continues to grow in importance. We need to keep private data private. We need to protect financial transactions and records. We need to protect online services from infiltration and attack.

We can obtain this protection through ‘Application Security’, which is all about building and delivering software that is safe and secure. And developing software within an integrated toolchain can greatly enhance security.

What’s application security?

Application Security encompasses activities such as:

  • Analyzing and testing software for security vulnerabilities
  • Managing and fixing vulnerabilities
  • Ensuring compliance with security standards
  • Reporting security statistics and metrics

There are several different categories of these tools; the following are the most interesting in terms of software integration:

  • Static Application Security Testing (SAST) – used to analyze an application for security vulnerabilities without running it. This is accomplished by analyzing the application’s source code, byte code, and/or binaries for common patterns and indications of vulnerabilities.
  • Dynamic Application Security Testing (DAST) – used to analyze a running application for security vulnerabilities. This is done by automatically testing the running application against common exploits. It is similar to penetration testing (pen testing), but fully automated.
  • Security Requirements tools – used for defining, prioritizing, and managing security requirements. These tools take the approach of introducing security directly into the software development lifecycle as specific requirements. Some of these tools can automatically generate security requirements based on rules and common security issues in a specified domain.

Other categories of Application Security tools, such as Web Application Firewalls (WAFs) and Runtime Application Self-Protection (RASP) tools, are more focused on managing and defending against known security vulnerabilities in deployed software, and are somewhat less interesting for integration.
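To make the SAST category concrete: at its simplest, static analysis means scanning source artifacts for known-bad patterns without ever executing the application. The toy scanner below flags a couple of classic issues in Python source; real SAST tools analyze parsed code, byte code or binaries and cover vastly more patterns than this sketch.

```python
import re
from pathlib import Path
from typing import List, Tuple

# Two classic vulnerability patterns a SAST tool might look for.
PATTERNS = {
    "hardcoded credential": re.compile(
        r'(password|secret|api_key)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE),
    "SQL built by string concatenation": re.compile(
        r'(SELECT|INSERT|UPDATE|DELETE)\b.*["\']\s*\+', re.IGNORECASE),
}


def scan(root: str) -> List[Tuple[str, int, str]]:
    """Return (file, line number, finding) for every match under root."""
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(
                path.read_text(errors="ignore").splitlines(), start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings


if __name__ == "__main__":
    for file, lineno, finding in scan("."):
        print(f"{file}:{lineno}: {finding}")
```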

There are many vendors of Application Security tools. Some of the most popular are: Whitehat, who makes SAST and DAST tools; IBM, whose AppScan suite includes several SAST and DAST tools; SD Elements, who makes Security Requirements tools; HPE, whose Fortify suite includes SAST, DAST, and RASP tools; Veracode, who produces SAST and DAST tools; and Checkmarx, offering a source code analysis SAST tool. 

How is software integration relevant to application security?

When looking to integrate new tools into your software delivery process, it is important to first identify the stakeholders of those tools, and the assets consumed by and artifacts produced by those tools.

The most common stakeholders of Application Security tools are:

  • Security Professionals: write security requirements, prioritize vulnerabilities, configure rules for SAST and DAST tools, and consume security statistics, metrics, and compliance reports
  • Developers: implement security requirements in the software they are building, and fix vulnerabilities reported by SAST and DAST tools
  • Testers: create and execute manual security test plans based on security requirements
  • Managers: consume high level security reports, with a focus on the business and financial benefits of security efforts.

Common assets consumed by Application Security tools include:

  • Source code
  • Byte code
  • Binaries
  • Security rules

Common artifacts produced by Application Security include:

  • Vulnerabilities
  • Suggested fixes
  • Security requirements
  • Security statistics and metrics

With so many people and assets involved in the workflow, all stakeholders need to be able to trace artifacts, spot vulnerabilities and rely on automated reporting so that issues can be addressed as they arise. An integrated workflow provides exactly this.

Common integration scenarios

The three Software Lifecycle Integration (SLI) patterns we’ll be looking at are Requirements Traceability, Security Vulnerabilities to Development, and the Consolidated Reporting Unification Pattern.

  • Requirements Traceability: the goal is to be able to trace each code change all the way back up to the original requirement. When it comes to Application Security, we want security requirements to be included in this traceability graph. To accomplish this we need to link requirements generated and managed by Security Requirements tools into the Project and Portfolio Management (PPM), Requirements Management, and/or Agile tools where we manage other requirements and user stories. We can currently do this with a Gateway integration in Tasktop Integration Hub, by adding a Gateway collection that accepts requirements from our Security Requirements tool and creates matching requirements or user stories in our PPM, Requirements Management, or Agile tool.
  • Security Vulnerabilities to Development: this is about automatically reporting security vulnerabilities to our development teams to quickly fix them. To accomplish this we need to link vulnerabilities reported by SAST and DAST tools into our Defects Management or Agile tools, where developers will see them and work on a fix. We can currently do this with a Gateway integration in Tasktop Integration Hub, by adding a Gateway collection that accepts vulnerabilities from SAST and DAST tools and creates matching defects in our Defects Management or Agile tool.
  • The Consolidated Reporting Unification Pattern aims to consolidate development data from the various tools used by teams across an organization so that unified reports can be generated. When it comes to Application Security, we want data about security requirements and vulnerabilities included so that it can be reported on too. We need to collect these artifacts produced by our Application Security tools into our data warehouse. We can currently accomplish this with a Gateway Data integration in Tasktop Integration Hub, by creating a Gateway collection that accepts security requirements and vulnerabilities from our various Application Security tools and flows them into a common Data collection.
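As a sketch of the second pattern above, here is what a minimal gateway endpoint could look like: it accepts vulnerability payloads from SAST/DAST tools and files matching defects where developers will see them. The payload shape, route and downstream API are all invented for illustration; in Tasktop Integration Hub the Gateway collection plays this role through configuration rather than code.

```python
import requests
from flask import Flask, request

app = Flask(__name__)

DEFECT_TRACKER_API = "https://tracker.example.com/api/defects"  # hypothetical


@app.route("/gateway/vulnerabilities", methods=["POST"])
def receive_vulnerability():
    """Accept a vulnerability report from a SAST/DAST tool and create a
    matching defect in the tracker where developers will actually see it."""
    vuln = request.get_json()
    defect = {
        "title": "[Security] " + vuln["name"],
        "description": vuln.get("description", ""),
        "severity": vuln.get("severity", "high"),  # default conservatively
        "labels": ["security", vuln.get("scanner", "unknown")],
    }
    resp = requests.post(DEFECT_TRACKER_API, json=defect)
    resp.raise_for_status()
    # Hand the new defect's id back to the scanner for traceability.
    return {"defect_id": resp.json()["id"]}, 201
```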

For further information on how Tasktop integrates your software value stream and enhances Application Security, visit our website and contact us today.

Key Lessons From A Big Software Product Launch

Thu, 02/16/2017 - 14:14

Last month was a seminal moment for us – we launched our next-generation software integration product, Tasktop. As ever, the product development journey was one hell of a ride.

Three years. 500,000 lines of code. 20,000 automated tests. 5,000 wiki pages. Hundreds of design sessions. Many mistakes. Some tears. A few moments of deep soul searching. And many days filled with tremendous pride watching a team pull together to deliver something special – something that we truly believe will transform the way people think about integration.

In true Agile style, I’m a big believer in retrospection, ascertaining key lessons and gleaning takeaways from the experience to improve the way we work. So what did we learn this time round?

It’s ALL about the people and trust.

To combine the powers of talented individuals and turn them into a true team, you need trust. All of our team will admit there were some rocky moments at the beginning and that’s only natural. Yet with hard work and perseverance, you can forge a close powerful unit that runs like a well-oiled machine.

Trust that the product manager and designers have fastidiously analyzed what the customers want and are requesting an accurate representation of their needs. And trust that the architects and developers are designing a codebase and architecture that can be built upon (while remaining as nimble and lightweight as possible).

If I had a ‘magic button’ (everyone at Tasktop knows my obsession with magic buttons!), it would be the ‘trust’ button. Of course that is not possible – trust is built up over time and can’t be rushed – but once you’ve got it, man, is it an addiction!

It takes a village.

Building a pioneering software product isn’t all about the developers (although they’re obviously integral). To get the best result possible, you need:

  • Strong user-focused product managers
  • Imaginative and creative user experience designers
  • QA professionals that see the BIG picture (as well as thousands of details)
  • Technical writers willing to rethink documentation from the ground up

Throw sales and marketing into the mix and the village becomes more of a city by the end. Embrace it, take everyone in, and watch your product development flourish in this environment.

Don’t give up and don’t give in.

Set a vision and DEMAND a relentless pursuit of that vision. When it seems like everything is being called into question, reach deep inside and stick to your core vision. It’s your constant, your north star.

Now, this doesn’t mean that you can’t alter and tweak things along the way – in fact, I would say if you don’t do a good amount of that you are heading for potential disaster. But if you don’t believe in the core vision that was set, then you will lose your way.

Have urgency but at the same time patience.

There is a somewhat elusive proper balance of patience and urgency. If I had another magic button I would use it for this purpose…but since I don’t, I think your best bet is to trust your gut to know when to push, and when to step back and let things steep.

Laugh a little. Or a lot.

I treasure the moments during the course of building Tasktop where we were laughing so hard that we cried. The thing I love is that I can’t even remember many of the funny moments that we shared – there were too many. And, yes, there were also a not insignificant number of moments where there was frustration and downright anger. But those memories aren’t what stick – what sticks are the moments where we overcame the hurdle, pulled together and laughed at ourselves.

Be grateful for those who support you.

Last but definitely not least, appreciate and thank the people that made the vision come to life. That doesn’t just include the direct teams that were involved, but also those who support you outside of work such as your friends and families.

The family that puts up with 28 trips to Vancouver in five years. The family that lives and breathes the ups and downs with you. The family that wants to see this product succeed almost more than you do!

To that end, I would like to thank my family – my husband, my son and my daughter – for putting up with the craziness of the last three years! If only the walls could talk… but instead, my 10-year-old daughter decided to write down her own thoughts a few weeks before the launch:

“3,2,1…BLASTOFF!!!!!! This launch is all my mom has talked about (and the election) for the past 3 months. How much she has been talking about it shows that this launch must be really important. You should get the front page on the newspaper – which if you haven’t read since the internet came out I don’t blame you.

To be frank, I actually don’t know what the big product is supposed to be, but from past experience, Mommy’s team gets all upset when a product doesn’t work. Also, another benefit of getting this thing to work is that everybody will be super happy and joyful.

But I will say, whoever scheduled the timing of her big trip to Vancouver for the launch must not have realized that the big trip almost makes me not see my mom for two weeks because I am going to Hawaii (yes, my parents are that awesome they are letting me go to Hawaii for a week as a 10th birthday present).

But, of course, don’t let that stop you from making this Tasktop’s best product yet. Make the product, make it work, and make it the most awesome thing the world has ever seen.

“Tasktop, the most empowering force ever!” I can see it in those big letters on the front page. Yes, I am waiting for the day I see those exact words marching bravely across the front page of the newspaper. So, don’t just stand there, get up and show the amazing, futuristic, and wonderful world of Tasktop.”

– Bailey Hall, one of Tasktop’s youngest and brightest thought leaders.

I’d like to thank everyone involved in making the launch of Tasktop a success as we move on to the next significant stage in the product’s development – getting it to market and harnessing its many capabilities to drive our customers’ large-scale Agile and DevOps transformations.

For more info on the new product, check out our new site www.tasktop.com
