Tasktop Blog
Connecting the world of software delivery.

Strengthening Application Security in the Software Development Lifecycle

Tue, 02/21/2017 - 13:32

As software continues to pervade our lives, the security of that software continues to grow in importance. We need to keep private data private. We need to protect financial transactions and records. We need to protect online services from infiltration and attack.

We can obtain this protection through ‘Application Security’, which is all about building and delivering software that is safe and secure. And developing software within an integrated toolchain can greatly enhance security.

What’s application security?

Application Security encompasses activities such as:

  • Analyzing and testing software for security vulnerabilities
  • Managing and fixing vulnerabilities
  • Ensuring compliance with security standards
  • Reporting security statistics and metrics

There are several categories of these tools; the following are the most interesting in terms of software integration:

  • Static Application Security Testing (SAST) – used to analyze an application for security vulnerabilities without running it. This is accomplished by analyzing the application’s source code, byte code, and/or binaries for common patterns and indications of vulnerabilities.
  • Dynamic Application Security Testing (DAST) – used to analyze a running application for security vulnerabilities by automatically testing it against common exploits. This is similar to penetration testing (pen testing), but fully automated.
  • Security Requirements tools – used for defining, prioritizing, and managing security requirements. These tools take the approach of introducing security directly into the software development lifecycle as specific requirements. Some of these tools can automatically generate security requirements based on rules and common security issues in a specified domain.
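
To make the SAST idea concrete, here is a toy static check in Python. It is a sketch only: the rule names and regexes below are invented for illustration, and real SAST tools rely on parsing and data-flow analysis of source, byte code or binaries rather than regular expressions.

```python
import re

# Invented, illustration-only rules; a real SAST tool ships hundreds of
# analyses backed by parsers and data-flow tracking, not line regexes.
RULES = {
    "hardcoded-password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    "sql-string-concat": re.compile(r"execute\(\s*['\"].*['\"]\s*\+"),
}

def scan_source(source: str):
    """Return (line_number, rule_name) pairs for lines matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

code = 'db_password = "hunter2"\nprint("hello")\n'
print(scan_source(code))  # [(1, 'hardcoded-password')]
```

The point of the sketch is simply that static analysis inspects the code itself, without ever executing the application.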

Other categories of Application Security tools, such as Web Application Firewalls (WAFs) and Runtime Application Self-Protection (RASP) tools, are more focused on managing and defending against known security vulnerabilities in deployed software, and are somewhat less interesting for integration.

There are many vendors of Application Security tools. Some of the most popular are: Whitehat, who makes SAST and DAST tools; IBM, whose AppScan suite includes several SAST and DAST tools; SD Elements, who makes Security Requirements tools; HPE, whose Fortify suite includes SAST, DAST, and RASP tools; Veracode, who produces SAST and DAST tools; and Checkmarx, offering a source code analysis SAST tool. 

How is software integration relevant to application security?

When looking to integrate new tools into your software delivery process, it is important to first identify the stakeholders of those tools, and the assets consumed by and artifacts produced by those tools.

The most common stakeholders of Application Security tools are:

  • Security Professionals: write security requirements, prioritize vulnerabilities, configure rules for SAST and DAST tools, and consume security statistics, metrics, and compliance reports
  • Developers: implement security requirements in the software they are building, and fix vulnerabilities reported by SAST and DAST tools
  • Testers: create and execute manual security test plans based on security requirements
  • Managers: consume high level security reports, with a focus on the business and financial benefits of security efforts.

Common assets consumed by Application Security tools include:

  • Source code
  • Byte code
  • Binaries
  • Security rules

Common artifacts produced by Application Security tools include:

  • Vulnerabilities
  • Suggested fixes
  • Security requirements
  • Security statistics and metrics

With so many people and assets involved, all stakeholders need to be able to trace artifacts, spot vulnerabilities and rely on automated reporting to address issues as they arise. An integrated workflow provides this.

Common integration scenarios

The three Software Lifecycle Integration (SLI) patterns we’ll be looking at are Requirements Traceability, Security Vulnerabilities to Development, and the Consolidated Reporting Unification Pattern.

  • Requirements Traceability: the goal is to be able to trace each code change all the way back up to the original requirement. When it comes to Application Security, we want security requirements to be included in this traceability graph. To accomplish this we need to link requirements generated and managed by Security Requirements tools into the Project and Portfolio Management (PPM), Requirements Management, and/or Agile tools where we manage other requirements and user stories. We can currently do this with a Gateway integration in Tasktop Integration Hub, by adding a Gateway collection that accepts requirements from our Security Requirements tool and creates matching requirements or user stories in our PPM, Requirements Management, or Agile tool.
  • Security Vulnerabilities to Development: this is about automatically reporting security vulnerabilities to our development teams to quickly fix them. To accomplish this we need to link vulnerabilities reported by SAST and DAST tools into our Defects Management or Agile tools, where developers will see them and work on a fix. We can currently do this with a Gateway integration in Tasktop Integration Hub, by adding a Gateway collection that accepts vulnerabilities from SAST and DAST tools and creates matching defects in our Defects Management or Agile tool.
  • Consolidated Reporting Unification Pattern aims to consolidate development data from the various tools used by teams across an organization so that unified reports can be generated. When it comes to Application Security, we want data about security requirements and vulnerabilities included so that it can be reported on too. We need to collect these artifacts produced by our Application Security tools into our data warehouse. We can currently accomplish this with a Gateway Data integration in Tasktop Integration Hub, by creating a Gateway collection that accepts security requirements and vulnerabilities from our various Application Security tools and flows them into a common Data collection.
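
The second pattern above can be sketched as a simple translation step. Everything in this example is hypothetical – the field names and the severity-to-priority table are invented, and a real Gateway integration would work through each tool's API – but it shows the shape of the vulnerability-to-defect flow:

```python
# Hypothetical mapping from scanner severities to defect-tracker priorities.
SEVERITY_TO_PRIORITY = {"critical": "P1", "high": "P2", "medium": "P3", "low": "P4"}

def vulnerability_to_defect(vuln: dict) -> dict:
    """Translate a SAST/DAST finding into a payload a defect tracker accepts."""
    return {
        "title": f"[Security] {vuln['rule']} in {vuln['file']}",
        "description": vuln.get("details", ""),
        "priority": SEVERITY_TO_PRIORITY.get(vuln["severity"], "P3"),
        "labels": ["security", vuln["tool"]],
    }

finding = {"rule": "XSS", "file": "login.jsp", "severity": "high", "tool": "dast-scanner"}
print(vulnerability_to_defect(finding)["priority"])  # P2
```

In an integration hub, this translation happens automatically each time the scanner reports a new finding, so developers see vulnerabilities in the tool where they already work.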

For further information on how Tasktop integrates your software value stream and enhances Application Security, visit our website and contact us today.

Key Lessons From A Big Software Product Launch

Thu, 02/16/2017 - 14:14

Last month was a seminal moment for us – we launched our next-generation software integration product, Tasktop. As ever, the product development journey was one hell of a ride.

Three years. 500,000 lines of code. 20,000 automated tests. 5,000 wiki pages. Hundreds of design sessions. Many mistakes. Some tears. A few moments of deep soul searching. And many days filled with tremendous pride watching a team pull together to deliver something special – something that we truly believe will transform the way people think about integration.

In true Agile style, I’m a big believer in retrospection, ascertaining key lessons and gleaning takeaways from the experience to improve the way we work. So what did we learn this time round?

It’s ALL about the people and trust.

To combine the powers of talented individuals and turn them into a true team, you need trust. All of our team will admit there were some rocky moments at the beginning, and that’s only natural. Yet with hard work and perseverance, you can forge a close, powerful unit that runs like a well-oiled machine.

Trust that the product manager and designers have fastidiously analyzed what the customers want and are requesting an accurate representation of their needs. And trust that architects and developers are designing a codebase and architecture that can be built on (while remaining as nimble and lightweight as possible).

If I had a ‘magic button’ (everyone at Tasktop knows my obsession with magic buttons!), it would be the ‘trust’ button. Of course that is not possible – trust is built up over time and can’t be rushed – but once you’ve got it, man, is it an addiction!

It takes a village.

Building a pioneering software product isn’t all about the developers (although they’re obviously integral). To get the best result possible, you need:

  • Strong user-focused product managers
  • Imaginative and creative user experience designers
  • QA professionals that see the BIG picture (as well as thousands of details)
  • Technical writers willing to rethink documentation from the ground up

Throw sales and marketing into the mix and the village becomes more of a city by the end. Embrace it, take everyone in and watch your product development flourish in this environment.

Don’t give up and don’t give in.

Set a vision and DEMAND a relentless pursuit of that vision. When it seems like everything is being called into question, reach deep inside and stick to your core vision. It’s your constant, your north star.

Now, this doesn’t mean that you can’t alter and tweak things along the way – in fact, I would say if you don’t do a good amount of that you are heading for potential disaster. But if you don’t believe in the core vision that was set, then you will lose your way.

Have urgency but at the same time patience.

There is a somewhat elusive proper balance of patience and urgency. If I had another magic button I would use it for this purpose…but since I don’t, I think your best bet is to trust your gut to know when to push, and when to step back and let things steep.

Laugh a little. Or a lot.

I treasure the moments during the course of building Tasktop where we were laughing so hard that we cried. The thing I love is that I can’t even remember many of the funny moments that we shared – there were too many. And, yes, there were also a not insignificant number of moments where there was frustration and downright anger. But those memories aren’t what stick – what sticks are the moments where we overcame the hurdle, pulled together and laughed at ourselves.

Be grateful for those who support you.

Last but definitely not least, appreciate and thank the people that made the vision come to life. That doesn’t just include the direct teams that were involved, but also those who support you outside of work such as your friends and families.

The family that puts up with 28 trips to Vancouver in five years. The family that lives and breathes the ups and downs with you. The family that wants to see this product succeed almost more than you do!

To that end, I would like to thank my family; my husband, my son and my daughter – I thank all of you for putting up with the craziness of the last three years! If only the walls could talk… but instead, my 10-year-old daughter decided to write down her own thoughts a few weeks before the launch:

“3,2,1…BLASTOFF!!!!!! This launch is all my mom has talked about (and the election) for the past 3 months. How much she has been talking about it shows that this launch must be really important. You should get the front page on the newspaper – which if you haven’t read since the internet came out I don’t blame you.

To be frank, I actually don’t know what the big product is supposed to be, but from past experience, Mommy’s team gets all upset when a product doesn’t work. Also, another benefit of getting this thing to work is that everybody will be super happy and joyful.

But I will say, whoever scheduled the timing of her big trip to Vancouver for the launch must not have realized that the big trip almost makes me not see my mom for two weeks because I am going to Hawaii (yes, my parents are that awesome they are letting me go to Hawaii for a week as a 10th birthday present).

But, of course, don’t let that stop you from making this Tasktop’s best product yet. Make the product, make it work, and make it the most awesome thing the world has ever seen.

“Tasktop, the most empowering force ever!” I can see it in those big letters on the front page. Yes, I am waiting for the day I see those exact words marching bravely across the front page of the newspaper. So, don’t just stand there, get up and show the amazing, futuristic, and wonderful world of Tasktop.”

– Bailey Hall, one of Tasktop’s youngest and brightest thought leaders.

I’d like to thank everyone involved in making the launch of Tasktop a success as we move on to the next significant stage in the product’s development – getting it to market and harnessing its many capabilities to drive our customers’ large-scale Agile and DevOps transformations.

For more info on the new product, check out our new site.

Value Stream Integration

Tue, 02/14/2017 - 07:27

Every business is now a digital business – if you’re not, then you’re vulnerable to disruption. Traditional business models, infrastructures and operations are in flux as software continues to usurp and transform the status quo. Look at Airbnb and hotels, Uber and taxis, Netflix and films, Amazon and retail… you get the picture. The message is clear: keep up or be left behind. Or to paraphrase organizational theorist Geoffrey Moore: “Innovate or die!”

Most CIOs are acutely aware of this state of play and are under pressure to optimize their organization’s software delivery process. Many are investing in new staff, tools and processes to drive Agile and DevOps initiatives and are often encouraged (and given false hope) by initial success – especially at a local level. Then they try to scale and become stuck. There are too many tools, people and disciplines. The toolchain is fragmented and their transformations are failing.

Why does this happen? The problem is that best-of-breed Agile and DevOps tools don’t work together, creating friction in the way stakeholders interact. This causes manual work that increases cost, reduces velocity and frustrates team members, all while making it difficult for management to gain the visibility and traceability they so desperately need to make key business decisions.

Organizations continually adopt new tools to improve the individual disciplines they serve – test automation, requirements management, agile planning, DevOps and the like. By using these tools, stakeholders create work specifically for collaborating with their colleagues. But that collaboration is compromised precisely because each of these disciplines is using different, unintegrated tools.

Furthermore, managers want to see metrics and dashboards for real-time status reports so they can optimize a process and/or ensure compliance. However, with a fragmented toolchain it is nearly impossible to obtain a holistic view. And everyone knows that the only way to improve a process is to look at it holistically.

What can be done? The key is to integrate the value stream.

Value Stream: Sequence of activities required to design, produce, and provide a specific good or service, and along which information, materials, and worth flows – Business Dictionary[1] 

When we talk about an integrated value stream in software delivery, we mean bringing together the tools across the software development lifecycle to radically improve team effectiveness and project visibility, which allows you to:

  • Eliminate the wasted time, errors and rework by automating the flow of work across people, teams, companies and tools. This increases the team’s velocity and capacity, enhances collaboration and improves employee satisfaction
  • Enable real-time visibility into the state of application delivery by coalescing activity data across the value stream. This data can be used for management, optimization and governance reporting and dashboarding, as well as data for automated traceability reporting
  • Create a modular, responsive tool infrastructure that can itself react and adapt to new tool innovations and changing working practices, enabling tools to be plugged in and out on-the-fly

Until now, creating this sort of integrated software delivery value stream has been too hard. Companies adopted point-to-point and homegrown integrations that were costly, brittle and unable to scale. It was simply too difficult to automate the flow of information across a large-scale tool ecosystem, making value stream integration and visibility financially unviable. But now the game has changed.

Our model-based approach dramatically reduces the complexity of creating and managing integrations, and scales to dozens of tools, thousands of users and millions of artifacts. For the first time, integration and visibility across the entire software value stream are economically possible, and they are helping some of the world’s most successful organizations – including nearly 50% of the Fortune 100 – to thrive in a digital world.

  • Are you finding it difficult to give your managers visibility into how things are going?
  • Are your colleagues complaining that they waste a lot of time on administration?
  • Is there a disconnect between your Agile methods and the need for governance and compliance?

If so, check out our videos about our one-of-a-kind model-based integration approach and speak to us today about integrating your value stream to drive your Agile and DevOps transformations.

[1] Business Dictionary

Reimagining Software Integration

Thu, 02/09/2017 - 12:40

For too long, software lifecycle integration has been viewed as the red-headed stepchild at organizations – an unglamorous chore that is often considered a developer issue and a developer issue alone. That perception must change – it’s actually a critical organizational issue and this misconception is why Tasktop is leading the charge in rebranding integration.

The word ‘integration’ shouldn’t make your eyes glaze over, nor should it be last on the agenda when talking to management about how to succeed at scale. Integration should elicit intrigue and demand immediate attention – it’s THAT important. Why? Because integration is, in fact, precisely what will allow you to achieve organizational success at scale.

Actually, let me say that stronger; without integration, you won’t be able to scale. Wait…stronger…succeeding at scale is 100% dependent on integration. Integration is precisely how you will achieve your business goals, be it an agile transformation, DevOps initiative or improving your software delivery capabilities.

We’re reimagining integration to fundamentally change the way people think about how they connect software development tools and transform the way they deliver software. Let me show you how…

Imagine a world where… you can configure a sophisticated integration between two complex systems in under an hour. How? With a completely reimagined user experience that presents itself not with bits and bytes and XML configuration but instead in a visual, intuitive, logical way that aligns with what you already know and how you think about the tools you use. No coding required. This video further explains this benefit.

Imagine a world where… after you configure your first integration, you can scale to hundreds of projects instantly, thanks to the magic of models, which are the secret sauce behind being able to map once and then scale infinitely – as explained in this video.

Imagine a world where… The tool you are using to integrate has already codified so much about the end tools that integrations almost create themselves. Scary? Ok, maybe a little. But our smart mappings and auto-suggests of flows show the power of connectors that are domain aware.

Imagine a world where… all integrations work. All the time. With Tasktop, it is built in from the ground up. Nothing runs on Tasktop that hasn’t been through our ‘Integration Factory’ – a unique testing infrastructure that runs over 500k tests a day across 300+ connectors.

But those are all just features … and above I said that integration was a 100% dependency to be able to scale… so let’s talk about that a bit.

Scaling means two things: more people and more processes. And more people and more processes means more tools. But if those tools don’t operate as one, scaling quickly turns into a creaky machine with all kinds of manual handoffs, endless meetings and unhappy practitioners. So integration means getting all these tools, teams and disciplines to act as one.

But can you really get these various tools that aren’t designed to work together to ‘act as one’? The short answer is “Yes!”. The longer answer is “Yes – but only with Tasktop.” Only with Tasktop can you ensure your tool landscape consistently functions as a single, powerful entity, no matter how many systems you add to the tool stack.

We know the key ingredients that enable you to scale your toolchain so that you can drive organizational objectives and consistently deliver customer value. In fact, our ‘reimagined world of integration’ places strong emphasis on the software value stream, and it seems we’re on to something big. Since June 2016, we’ve received extremely positive feedback on our new proposition from participants in our Early Access Program.

Value stream integration is the next significant chapter in software delivery and that’s why we have launched Tasktop Integration Hub, our new product and pioneering approach to large-scale software integration.

If you missed our live-streamed launch event last week, you can watch the recording here. In the video, our CEO and founder Dr. Mik Kersten and myself introduce the product, while customers explain the product’s importance and how Tasktop is supporting their Agile and DevOps transformations at scale.

Let’s transform the technology landscape together and be part of history.

Eliminate the PMO Scavenger Hunt

Wed, 02/08/2017 - 11:28

The sheer multitude of projects that an organization undertakes every day puts enormous pressure on the Project Management Office (PMO). And considering that 97 percent of organizations believe project management is critical to business performance and organizational success[1], it’s paramount to ensure they have the best intel to do their job efficiently.

Project managers rely heavily on the PMO to keep them abreast of the latest information regarding their projects, as well as other projects that may have an impact on their work. They also look to the PMO to provide key insights on a product’s journey from concept to delivery, identifying bottlenecks ahead of time to ensure smooth sailing. However, providing such a holistic overview is a huge challenge, which may explain why 50 percent of all PMOs close within just three years[2].

One of the key factors behind a PMO’s downfall is its access to vital data, which enables it to build the all-important real-time picture of the project portfolio. With regards to software development and delivery, PMOs need end-to-end visibility and traceability throughout the lifecycle so they can make key decisions on influential matters such as resource capacity, labor headcount, project budgeting, IT strategy and so forth.

Traditionally they have acquired this information through a cumbersome, time-consuming scavenger hunt between teams and tools that often work in silos. Without an intuitive system to gather this valuable information in one place, they’re forced to spend valuable time chasing down status reports, logging into specific tools, merging spreadsheets and involving themselves with other onerous manual work – precious time that could be better spent elsewhere.

But it doesn’t have to be that way – not with an integrated software lifecycle providing the visibility, traceability and valuable data they desperately need to do their job to the best of their abilities.

For further information, please download our guidelines to eliminating the PMO scavenger hunt.

You can also speak to our dedicated team who can best advise how to optimize your PMO.

[1] PwC, Global Project Management Report, 2012

TasktopLIVE: The Software Delivery Game is Changing

Mon, 02/06/2017 - 09:44

Last Tuesday, we unveiled our next-generation product Tasktop Integration Hub at our headquarters in Vancouver. During a live-stream event – TasktopLIVE – we set out our new approach to software delivery and explained how we’re redefining the Agile and DevOps landscape.

CEO and founder, Dr Mik Kersten, kicked off proceedings by providing acute analysis of the software delivery landscape and the current potency of Agile and DevOps transformations: “Agile and DevOps have come of age – we’re seeing a lot of success at startup level, but huge struggles when organizations try to scale these transformations.”

Summarizing Tasktop’s evolution over the last ten years, Kersten talked through how the company has continually built solutions that optimize the whole software lifecycle through sophisticated integration and market-leading expertise. The latest offering, he emphasized, is a natural response to how software is evolving and how people work and use applications.

“All teams work in their best-of-breed tools to improve functionality in their specific roles, but these tools aren’t connected or communicating. The result is a fragmented value stream that lacks the visibility, traceability, governance and security required to continuously deliver business value.”

To address this, we have devised an entirely new approach to Agile and DevOps. Tasktop allows enterprises to define value stream artifacts and flows in a unified Integration Hub, ensuring that teams get the productivity benefits of each tool, while the business realizes immediate ROI from eliminating the waste of manual entry, automating end-to-end traceability and easily achieving end-to-end visibility.

Then Nicole Bryan, Tasktop’s Vice President of Product Management, explained how easily and simply Tasktop can integrate as many tools as required for seamless scalability (a process that is “also quite fun!”). Also speaking was a selection of customers, all of whom elaborated on how Tasktop has helped their DevOps and Agile transformations and why the new approach is so important.

Carmen DeArdo, Technical Director at Nationwide, explained how Tasktop boosts his job performance: “I have to figure out how to make things work better across our delivery value stream. Tasktop helps me to do that and enables us to build exciting applications.”

DeArdo also reiterated how important visibility into the value stream is: “You can be a great driver and have a great car, but if it’s foggy and you can’t see the road, you’re going to slow down because you don’t trust what’s going on around you.”

Meanwhile Mark Wanish, former SVP, Consumer Sales & Marketing Technology Executive at Bank of America, has been involved in Agile transformations for over a decade and is a great advocate for Tasktop’s approach of focusing on the whole value stream: “You can make developers more Agile and improve their capabilities, but you can’t neglect elsewhere in the organization – for Agile to be a success, everyone needs to be involved and delivering value.”

Also on the panel was Jacquelyn Smith, Senior Director, Platform Technologies at Comcast, who recently began working with Tasktop following a big merger between Comcast and another engineering company: “Following the merger, we had an abundance of toolsets and instances thereof – we went from six tools to fifty! We wanted to scale products to serve our customers, but also be more sensible about how we move data between tools. We’ve just started working with Tasktop and they’re already helping us to support large-scale integrations and enabling us to work more simply, easily and quickly.”

You can watch the whole recording of the TasktopLIVE event here. For further information on Tasktop Integration Hub, please check out our brand-new website, which is jam-packed with new engaging content.

Interested in adopting our pioneering approach to software delivery? Contact us today and request a demo. 

Tasktop Integration Hub: Features and Models

Wed, 02/01/2017 - 09:30

At Tasktop, we’re very excited about our recent Tasktop Integration Hub launch. With this new product, we didn’t just set out to make incremental improvements. We set out to reimagine integration. Tasktop Integration Hub is one solution that handles pretty much all software delivery integration needs. It provides the right information to the right person in the right tool at the right time.

I set out to write about the features of this new product, but while writing, I had a few realizations…

First, reading a laundry list of features is boring. If you want to see the features along with some short videos, please visit our feature page.  You will find brief descriptions along with one-minute videos. These videos will do a much better job of ‘showing’ you the features, rather than me describing them.

Second, features are probably not what you care about. But you probably care that it works. That it’s powerful enough to support your organization. Tasktop just celebrated our 10th birthday. We’ve spent a decade listening to customers, and we’ve distilled thousands of hours of real-world customer feedback and use cases into a singular tool. And it does work.

During the past ten years, we’ve noticed that integration is often the last thing a company addresses in the software development process. Enterprises come to Tasktop after selecting tools and workflows. Sometimes, customers come to us after they try to handle integration on their own. This means we’ve had to be flexible in order to fit into almost any process. It also means we’ve seen a lot. It also made us work harder to provide the best integration tool on the market.

We understand that integration is about efficiency and ease of use. A good tool lets you do what you need to do. A great tool gets out of your way and lets you do what you want. A world-class tool helps you do things you never knew you wanted to do in the first place.

What our customers highlighted as critical, and how we listened:

  • Connecting the tools they use.
    • We connect to over 45 tools using fully tested connectors. And we’ve added a new integration style that allows our customers to push events from a wide variety of tools.
  • Scaling existing integrations.
    • We understand how important it is to quickly add new projects to existing integrations. Tasktop has added Model-based integration management so it is simple to add a 2nd, 3rd, or even 100th project to the integration and our solution ‘understands’ what our customer is working to accomplish with the integration.
  • Flexibility.
    • Our customers must implement business rules about what goes where and when. Tasktop can filter and route artifacts as well as comply with customer needs around frequency (and direction) of specific field updates.
  • Security.
    • We provide secure log-in via our web-based interface.
  • Minimize server traffic.
    • We consistently hear from potential customers about their concerns around server overload caused by near real-time updates. Tasktop Integration Hub has implemented Smart Change Detection to limit the load on tools. It senses the changes to artifacts and maintains the smallest footprint.
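
The change-detection idea above can be sketched as fingerprinting only the fields an integration cares about, so irrelevant updates never trigger a sync. This is an assumption-laden illustration of the general technique, not Tasktop’s actual implementation:

```python
import hashlib
import json

def fingerprint(artifact: dict, fields: list[str]) -> str:
    """Stable hash over only the fields an integration cares about."""
    relevant = {f: artifact.get(f) for f in sorted(fields)}
    return hashlib.sha256(json.dumps(relevant, sort_keys=True).encode()).hexdigest()

seen: dict[str, str] = {}  # artifact id -> last synced fingerprint

def has_changed(artifact_id: str, artifact: dict, fields: list[str]) -> bool:
    """True only when a relevant field differs from the last-seen state."""
    fp = fingerprint(artifact, fields)
    changed = seen.get(artifact_id) != fp
    seen[artifact_id] = fp
    return changed

bug = {"id": "D-1", "summary": "crash", "status": "open", "viewed_at": "10:01"}
print(has_changed("D-1", bug, ["summary", "status"]))  # True (first sight)
bug["viewed_at"] = "10:05"  # only an irrelevant field changed
print(has_changed("D-1", bug, ["summary", "status"]))  # False, nothing to sync
```

Skipping the second update is exactly what keeps the load on the end tools small: the integration only touches a tool’s API when a field it actually flows has changed.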

I did write that I wasn’t going to focus on features, but there is one important new aspect of Tasktop Integration Hub that I would like to cover. Before I do, I wanted to mention that Tasktop Integration Hub includes the most used, most important and most popular features found in Tasktop Sync, including:

  • Artifact relationships: maintains relationships between artifacts across all your tools.
  • Person mapping across tools: you know who made a comment, even if they made it in another tool.
  • Comment synchronization: people can converse in their tool of choice instead of relying on emails that are never attached to the persistent artifact.
  • Attachment synchronization: prevents duplicate logins to separate tools and cuts down on emails.
  • Routing: so that each artifact can be synchronized to the right place on the other side. To be honest, we’ve improved this enough to merit its own blog post.
  • And many more.

So now let me point out one of the things that makes Tasktop unique… and will make your integrations much more robust.

Introducing… Models

Models are Tasktop’s way of providing a universal translator for all tools. All tools speak different languages. Historically, integration tools have relied on a 1-to-1 mapping between tools. That’s fine if there are only two tools, but we’ve seen the pain that occurs when companies want to integrate three, five, six or more tools: the number of ‘translations’ between tool languages becomes untenable. With six tools, 15 translations are already needed. Think about what this does to tool lock-in. Swapping out one of these ‘languages’ for another requires five new translations. Models fix this.
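That ‘translations’ arithmetic is just pairwise combinatorics. As a rough sketch in plain Python (nothing Tasktop-specific), compare point-to-point mappings with a hub-and-model approach:

```python
from math import comb

def pairwise_translations(n_tools: int) -> int:
    # Point-to-point integration needs a translation for every pair of tools.
    return comb(n_tools, 2)

def model_based_translations(n_tools: int) -> int:
    # A hub model needs only one translation per tool: tool <-> model.
    return n_tools

for n in (2, 3, 6, 10):
    print(f"{n} tools: {pairwise_translations(n)} pairwise "
          f"vs {model_based_translations(n)} model-based")
```

Six tools need 15 point-to-point translations but only six model mappings, and swapping out a tool touches one mapping instead of five.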

Integrating Without Models

Integrating With Models

Models allow your organization to normalize the information flowing between tools.

You may be asking yourself “What is a Model?”

A Model is your abstract definition of a given artifact. It’s how an organization defines a specific ‘thing.’ For example, what defines a Defect in your organization? What are the common fields that are required to specify a Defect at your company? Not only that, but what are the values in those fields? For example, do you specify the Severity of your defects as Low, Medium or High? Or do you refer to them as Sev-1, Sev-2, Sev-3, Sev-4? Models let your organization decide how Defects should be ‘thought of’. The beauty of a Model is that the end tools don’t need to use the same field values. That’s part of the translation capability that Tasktop Integration Hub provides.
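As a hypothetical sketch (the names and data structures here are illustrative, not Tasktop’s actual API), model-based value mapping works by having each tool translate its own field values to and from the shared Model, so no tool ever needs to know another tool’s vocabulary:

```python
# The Model defines the organization's canonical severity values.
MODEL_SEVERITIES = ["low", "medium", "high"]

# Each tool maps its own vocabulary onto the Model (illustrative values).
TOOL_VALUE_MAPS = {
    "tracker_a": {"Low": "low", "Medium": "medium", "High": "high"},
    "tracker_b": {"Sev-4": "low", "Sev-3": "low",
                  "Sev-2": "medium", "Sev-1": "high"},
}

def to_model(tool: str, value: str) -> str:
    # Translate a tool-specific value into the canonical Model value.
    return TOOL_VALUE_MAPS[tool][value]

def from_model(tool: str, model_value: str) -> str:
    # Translate a Model value back into the first matching tool value.
    for tool_value, mv in TOOL_VALUE_MAPS[tool].items():
        if mv == model_value:
            return tool_value
    raise KeyError(model_value)

# A "Sev-1" defect in tracker_b arrives in tracker_a as "High":
print(from_model("tracker_a", to_model("tracker_b", "Sev-1")))  # High
```

Adding a third tool means writing one new value map against the Model, not one map per existing tool.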

This may sound complicated, but it’s not. Tasktop comes preconfigured with eight models. Think of these as starter Models. Maybe you’ll need a new model. Maybe you’ll only need to tweak an existing Model. Tasktop Integration Hub provides that flexibility.

The beauty of Models is that once an integration is created between two tools, adding another project from each tool takes a matter of seconds. See the Scaling Integrations video.

If you’re still interested in learning more about what Tasktop Integration Hub looks like, how easy it is to use and how easy it is to scale, you can check out the Tasktop Integration Hub Demo. This 11 minute demo illustrates how simple it can be to set up and scale an entire integration scenario involving four separate tools.

As Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.”  Tasktop isn’t magic, but we sure want it to feel that way to our customers.

Tasktop Integration Hub is a world-class integration tool that will help you integrate tools in a way that could only be imagined before today.

Tasktop Integration Hub Launched, Value Stream Integration for Enterprise Agile & DevOps

Tue, 01/31/2017 - 04:50

Agile has won, and DevOps is now standard at startups worldwide.  With all of the success stories we hear at nearly every conference we attend, why do conversations in our own conference rooms keep turning to a lack of clear business results, or to outright deployment and adoption failures?

The success of lean practices for software delivery is critical to digital transformation and innovation, and the failure to execute on them opens the door to disruption. Yet organizations rooted in “waterfall” practices are thinking about scaling Agile and DevOps the wrong way.  In prior decades, the way to succeed with new methodologies involved betting on the right platform.  But in the world of Agile and DevOps, there is no one platform.  Instead, we are witnessing a large-scale disaggregation of the tool chain towards the best-of-breed solutions.  For large-scale Agile and DevOps transformations to succeed, we must shift our thinking from end-to-end platform to tool chain modularity.

Today I am thrilled to announce that after over three years of development, we are releasing a whole new approach to scaling Agile and DevOps.  The Tasktop Integration Hub completely re-imagines the integration layer of software delivery, and connects the end to end flow of value from business initiative to delighted customer and revenue results.  To do this we have created the new concept of Model-Based Integration, where we allow organizations to define their Value Stream Integration layer right within Tasktop, automating flow across people, processes and tools.  You can then map every best-of-breed tool into that value stream, easily scaling from a single Agile team to tens of thousands of happy and productive IT staff.  And you can continue connecting new tools as your tool chain evolves, giving you the power of modularity for the tool chain itself. Tasktop makes the tool chain Agile and adaptable to your business needs.

This release unifies our previous Tasktop Sync, Gateway and Data products into a single Value Stream Integration offering that easily scales to connect hundreds of projects, tens of thousands of users and millions of artifacts.  All with a beautiful and intuitive web UI that enables you to connect all of your tools without writing a single line of code thanks to Model-Based Integration.

Over the coming days we will be posting more detail about what we have done, how we have done it, and how it changes the landscape of enterprise Agile and DevOps.  For now, check out the following videos to get a quick overview of the product highlights and a whole new way to see the ROI of your transformation.

This release is the culmination of not only hundreds of people and years of development at Tasktop, but countless hours and effort from a dozen leading IT organizations who became a part of our Early Access program in April, and who have helped take the concepts from whiteboards and mock-ups to using them in production today. I encourage you to watch some of their testimonials at our TasktopLIVE event and to join the conversation.

Product highlights include:

  • A world-first model-based paradigm for visually connecting dozens of tools across hundreds of projects without requiring any coding. For example, user stories, defects and support ticket models are defined in Tasktop, and then can be easily mapped across dozens of different projects and tools.
  • Support for applying different styles of integration across tools. For example, Agile and ITSM tools can be integrated for defect/ticket unification then easily connected to a database for instant Mean Time to Resolution (MTTR) metrics.
  • Easy scaling across hundreds of projects. By defining models that span projects and tools, new projects can be on-boarded easily and connected to the value stream.
  • All integrations work all of the time thanks to Tasktop’s unique Integration Factory. Multiple versions of the same tool can be connected, along with old versions of legacy tools and the frequently updated APIs of SaaS tools, without breaking because Tasktop tests all version combinations. Currently, Tasktop supports 51 tools and 364 versions.

For more see the Product Overview or Request a Demo.

APIs Are Not The Keys To The Integration Kingdom

Mon, 01/30/2017 - 10:53

Imagine a nirvana where software lifecycle integration just works. A place where an intricate ecosystem of best-in-class tools for software development and delivery runs seamlessly and its users benefit greatly from the steady flow of real-time information. Despite being a constant hub of activity, it’s also a place of calm – a Zen environment for everyone involved in the toolchain.

Every team – from testers to developers to PMOs to business analysts and PPMs – is in sync. Thanks to the end-to-end integrated workflow, everyone in the value chain has the visibility and traceability required to work on the project to the best of their abilities. Productivity is optimized and IT initiatives are driving their organization forward, helping them to consistently deliver high quality products and services to their customers.

At the heart of this nirvana are APIs. In this fantasy, APIs provide developers with all the essential information they need to make two endpoints connect. They possess this information because the vendors built their respective tools with integration in mind, including detailed documentation that helps external developers work with the tool’s repository through its API.

If only this nirvana existed. The reality is integration is one of the hardest technical processes that an organization can face. It’s an all-encompassing job and APIs have a starring role that significantly influences the outcome.

Now, using a tool’s APIs is the best and most stable way to access the information stored in the tool’s underlying database. APIs facilitate access to the endpoint’s capabilities and the artifacts that they manage, and they can also enforce business logic that prevents third parties from unauthorized activities.

However, while APIs are a critical piece of the integration puzzle, they also highlight the delicate intricacies involved in the integration process. Many of these APIs were actually created for the vendor’s convenience in building a tiered architecture, not for third-party integration. They were not made with a consumer in mind; integration was an afterthought, if you will.

As a result, these APIs are often poorly documented and incomplete:

  • Model objects don’t necessarily work correctly together
  • Data structures, how and when calls can be made, and the side effects of operations are often excluded from the documentation
  • Poor error handling and error messages are common
  • Edge cases and bugs are rarely documented
  • Some APIs aren’t fully tested, e.g. some tools may return success even when not all changes were made
  • Some APIs have unexpected side effects or behavior, e.g. updates that result in delays before changes appear
  • Some APIs have inconsistencies between versions, e.g. different vendor endpoints to retrieve tags

Because these issues aren’t documented, figuring out how to handle them requires a great deal of trial and error. And sadly, the vendor’s customer support staff is often unaware of many of these issues and of how to use their own API, so resolving them often requires access to the endpoint vendor’s development teams.

So what does this all mean exactly? Consider a kitchen for a second; the pantry is full of ingredients (APIs) to make a recipe (the formula for the integration), but without correct labelling (documentation of the APIs), we have no idea of what they are, their expiry date, how best to use them etc. Any attempt at cooking an integration will likely end in disaster.

What’s worse, these APIs can change as the endpoint vendors upgrade their tools. Depending on how thoroughly the vendor tests, documents and notifies users of API changes, these changes can break the carefully crafted integrations. For SaaS and on-demand applications, these upgrades happen frequently and sometimes fairly silently.

So any API-based connection is little more than glue holding together two systems – a temporary and unreliable measure. There’s no maintenance or intelligence built into the tool to ensure the systems are continuously working together. In a software world that faces a relentless barrage of planned and unplanned technical changes and issues, such a brittle integration is unacceptable. Your software delivery team will suffer, as will your overheads and the value you deliver.

With that in mind, we need to find a way to label the APIs and gain a better understanding of how to use them collectively to create first-class integrations. The first step is always to do an exhaustive technical analysis of the tool:

  • How is the tool used in practice?
  • What are the object models that represent the artifacts, projects, users and workflows inherent in the tool?
  • What are the standard artifacts and attributes, and how do we (quickly and easily) handle continual customizations such as additions and changes to the standard objects?
  • How do we link artifacts, create children and track time?
  • Are there restrictions on the transitions of the status of an artifact?
  • How do we use the APIs?

This analysis can be very time-consuming, especially when you factor in poor documentation and design flaws (in the context of integration). And what at first appear to be pretty simple tasks actually turn out to be surprisingly hard. For instance, ServiceNow has 26 individual permissions to go through – no quick or easy endeavor. The results of any analysis should reveal the knowledge discrepancies and highlight how the lack of information hampers the possibility/quality of the integration.

By now, you probably have a fair idea that using APIs to create an integration takes a herculean amount of effort behind the scenes. And trust us, that’s only the tip of the iceberg. We’ve spent over a decade building an encyclopedic understanding of software lifecycle integration, and the education never stops.

Fortunately, we’re fully equipped with the right brains, technology and processes to stay at the vanguard of the market, using domain expertise and semantic understanding to create robust large-scale integrations that grow with your software landscape.

For more information, please:
Speak to our dedicated team
Visit our product pages
Download our eBook on ‘Why Integration Is Hard’

We will also be discussing in detail the huge challenges involved in software lifecycle integration tomorrow (Tuesday, January 31st) during our special live streamed event, TasktopLIVE. You can find out more about the event here.

Why Do Software Lifecycle Integration Projects Fail?

Fri, 01/27/2017 - 13:16

Most software lifecycle integration (SLI) projects fail because organizations underestimate just how difficult integration is and are unaware that software development and delivery tools are not designed to integrate.

Endpoint tools were built by their vendors to create a tiered architecture and not necessarily for third-party integration. The tools are built for a specific function, e.g. JIRA for Agile Project Management, HPE ALM for Application Lifecycle Management, CA PPM (Clarity) for Project and Portfolio Management and so on. They’re best-of-breed tools built to optimize their users’ capabilities in their respective roles.

By looking at connecting ‘just’ two tools, you quickly see how technical clashes between them create a litany of complications that undermine the ability of the two tools to communicate – despite this being the bare minimum requirement of any integration.

You don’t want to just mirror one artifact in one tool in another – you want that artifact to be understood across the value chain so that all teams and tools understand the context of what they’re working with, and towards, for optimized collaboration. To do this, we must ascertain:

  • How each individual tool is used and by whom
  • What object models represent the artifacts, projects, users, workflows etc.
  • How to handle continual customizations such as changes and additions
  • What restrictions, user behaviours and needs apply
  • What future expansion and scalability objectives exist

Each endpoint has its own model of ‘like objects’ (such as a defect) and in theory they have the same type of data. But each tool stores and labels this data differently, and can be modified with custom attributes and permissions and with different formats and values.

For instance, the defect tracking tool may have three priority levels (1, 2, 3) but the agile planning tool may have four (critical, major, minor, trivial). They have the same understanding of the artifact, but possess no means to accurately communicate to each other. They need a multi-lingual translator.

These differences mean any synchronization between artifacts must occur between widely divergent data models, which creates a kind of ‘data shoehorning’. You’re trying to align two concepts that don’t naturally match, and this will create conflict – or what we call ‘impedance mismatch’.
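A tiny, illustrative example of that impedance mismatch (invented values, not any particular tool): squeezing the agile tool’s four priority levels into the defect tracker’s three is inherently lossy, and the loss shows up on the round trip:

```python
# Mapping four agile priorities onto a three-level tracker (illustrative only).
# "trivial" has no counterpart, so it must be shoehorned into level 3.
AGILE_TO_TRACKER = {"critical": 1, "major": 2, "minor": 3, "trivial": 3}
TRACKER_TO_AGILE = {1: "critical", 2: "major", 3: "minor"}

def round_trip(priority: str) -> str:
    # Synchronize agile -> tracker -> agile and see what survives.
    return TRACKER_TO_AGILE[AGILE_TO_TRACKER[priority]]

print(round_trip("major"))    # major  (survives intact)
print(round_trip("trivial"))  # minor  (shoehorned: the distinction is lost)
```

This is exactly why a shared model and explicit value mappings matter: the lossy choices become deliberate, documented decisions rather than silent data corruption.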

Impedance mismatch occurs because of the different languages being used and the relationships that artifacts have with other objects. These relationships must be understood to provide context; artifacts do not live independently of one another. Each story belongs to an epic in a tool, and there are many chapters within that story that must be understood and communicated for tools to interoperate. We call this ‘semantic understanding’.

In seeking this information, it’s only natural to consult the tool’s APIs. And it is at this junction that we discover our first real hurdle to integration. APIs rarely provide this ‘integration essential’ information because they’re not documented for such a process – as touched on earlier, they’re created for a specific purpose by the vendor.

If there is any documentation, then it’s often vague and/or incomplete. And of course, all tools are subject to sudden upgrades and changes – especially given the rise of on-demand and SaaS applications – which will instantly undermine any integration. You can read more about why APIs are a double-edged sword for integration in our blog ‘APIs are not the keys to the integration kingdom’ next week.

Furthermore, connecting two endpoints is only the start. The real challenge comes when you want to add a third, fourth or fifth tool, which is exactly what effective large-scale integration requires. It would be only natural to assume that once the ‘hard work’ of the first connection has been done, any additional integration would be a simple and iterative process. Sadly, this is not the case – there is no one proven formula. The complexity only increases:

While the learning curve isn’t as steep as the first integration, the curve doesn’t flatten as one would hope. Some of the issues that reared their ugly heads in the first integration will return. Once again you’ll have to run the technical analysis of the tool, establish how artifacts are represented, identify the similarities and reconcile the differences with other tools in your software workflow. The API information will once again be little help and there will be more unforeseen circumstances.

So how do you safeguard your software development ecosystem with a robust, steadfast integration?

The key is a solution that understands the complex science of large-scale software development integration and possesses the ‘next level’ information that APIs don’t provide. A model-based solution that provides the multi-lingual translator to ensure that all endpoints, regardless of number, can communicate with each other. If you’re investigating a solution you need to make sure it includes the following:

  • Semantic understanding
  • Domain expertise
  • Neutral relationships with endpoint vendors that allows for deep understanding of the tools
  • Testing infrastructure that ensures integrations are always working

For more information, please:
Download our eBook on ‘Why Integration Is Hard’
Speak to our dedicated team
Visit our product pages

We will also be discussing the huge challenges involved in software lifecycle integration on Tuesday, 31st January – during our special live streamed event, TasktopLIVE. You can find out more about the event here.

Bringing ITSM and DevOps together

Thu, 01/26/2017 - 13:27

Sometimes a new year brings a new way of thinking. When it comes to software integration, it’s time to stop focusing on connecting specific tools and start focusing on enabling collaboration, reporting, and traceability for all of the domains or silos in your organization. Connecting specific tools is a technical detail, but connecting silos is what drives real value for an organization. In this blog series, members of the Tasktop Solutions team will review several different domains of software development and point out how improvements can be made using integration.

IT Service Management (ITSM) is one such domain. It encompasses customers, services, quality, business needs, and cost. The goal of ITSM is to enable IT to manage all of these holistically. This helps optimize the consumption and delivery of the services provided by the IT organization. Many people view ITSM as the service desk, but it’s not just about tickets and support. ITSM relates to the overall management of the IT organization. Service desk is just one small piece. ITSM is typically operated within the IT team, applying one of the many frameworks that can help ensure success. ITIL (IT Infrastructure Library) is one of the most common frameworks, but there are others (like COBIT, ISO 20000, and SIAM) – all used for very specific purposes.

There are also many different ITSM and Service Desk tools available today. They can be generic or focused on one of the frameworks. A few examples are:

  • ServiceNow ServiceDesk
  • BMC Remedy
  • Cherwell Service Management
  • HPE Service Manager
  • Salesforce Service Cloud
  • Zendesk
  • Freshdesk
  • Atlassian JIRA Service Desk

The ITIL framework used in ITSM provides a library of processes that utilize a variety of functions (service desk is one) to help ensure that the design, implementation, management, and support of an organization’s IT services are developed and delivered optimally and in a controlled manner. Most organizations utilize only a few ITIL processes. Typically, they include:

  • Incident Management
  • Problem Management
  • Change Management

Using these three processes, organizations can speed up delivery and guarantee that high-quality services are provided to customers. These processes also help ensure that issues are handled properly, categorized, and rolled out in a controlled manner.

The increased push to bring DevOps into ITSM has also created a pressing need for integration, because integration helps the organization manage things closely even when a variety of teams are using a variety of tools (e.g. Agile tools like Atlassian JIRA or LeanKit). It also enables the organization to maintain traceability and ensure that quality services are being provided.

And integration is not just about connecting the tools. It’s also about connecting the teams involved in the work that is being tracked in these tools. ITSM is a holistic process that can touch all aspects of the software development process from support and IT professionals to developers and product managers. When looking to integrate with an ITSM tool in a DevOps world, the three main processes (incident, problem, and change management) are very complementary to the ways integration works best. Commonly, development teams require tight interaction with the IT organization in order to handle common patterns such as Help Desk Incident Escalation, Help Desk Problem Escalation, Help Desk Feature Request, and Known Defect Status Reporting to Help Desk.

To put this all together, incidents and problems originating in the ITSM tool can be escalated to the development team as a defect for resolution and to the testing team for verification. Once that defect is fixed, the development team can use their tool of choice to open a new change request, which will automatically be created in the ITSM tool, to deploy the fix to production. This integration results in seamless collaboration between the teams, within their tool of choice, while ensuring that traceability is maintained between these systems and the originating records.
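The escalation pattern described above can be sketched as a pair of hypothetical routing functions (the field names and identifiers below are invented for illustration, not from any specific ITSM or agile tool):

```python
# Sketch of the incident -> defect -> change-request flow, preserving a
# trace link ("origin") back to the record that started the chain.

def escalate_incident(incident: dict) -> dict:
    # An ITSM incident becomes a defect in the development tool.
    return {
        "type": "defect",
        "origin": incident["id"],          # trace link to the incident
        "summary": incident["summary"],
    }

def resolve_defect(defect: dict) -> dict:
    # A fixed defect triggers a change request back in the ITSM tool.
    return {
        "type": "change_request",
        "origin": defect["origin"],        # trace link carried forward
        "action": "deploy fix to production",
    }

incident = {"id": "INC-1042", "summary": "Login page returns 500"}
change = resolve_defect(escalate_incident(incident))
print(change["origin"])  # INC-1042 – traceability back to the original incident
```

The key design point is that the trace link travels with the artifact at every hop, so the organization can always answer “what was required to repair this problem?”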

Once all tools and teams are integrated as a part of the ITSM process, the delivery of changes is faster, more automated, and there is an enhanced level of traceability—so the organization knows what was required to repair a problem or complete a change request. This results in increased effectiveness and efficiency when it comes to the process and the product being delivered.

As companies grow, there is an increased need to look at “supply chain integration.” This is typically due to an increase in outsourcing IT services and a need for different organizations to work together. Integrating ITSM tools between 3rd parties can be a great way to ensure that information is transferred quickly between the systems and without error. This allows companies to work together seamlessly.

Why is Software Lifecycle Integration So Damn Hard?

Tue, 01/24/2017 - 13:55

Teams within the software development and delivery lifecycle are increasingly working in their best-of-breed tools to enhance their capabilities to build powerful, cutting-edge software that helps their organizations to innovate and thrive in a digital world.

But while this progression is great for individual teams, the benefits don’t necessarily extend across the lifecycle because these tools are not designed to connect.

Consequently, teams and disciplines end up working in distinct silos, and as more tools are introduced to the software delivery lifecycle, the tool stack becomes fragmented. The quality and speed of software delivery suffers as a result.

This disconnect means poor visibility and traceability across the workflow, which:

  • Undermines governance
  • Compromises compliance and security
  • Slows productivity
  • Increases overheads
  • Decreases business value

That’s why many organizations are integrating their software lifecycle. But identifying this issue is just the start – the reality is that integrating Agile and DevOps tools is actually really, really hard.

To help, we have developed an eBook that delves into why integration is so hard, provides key insights into best practice and explains how you can begin your successful integration journey without ever looking back.

In the eBook, we discuss:

  • Why integration is hard
  • How software lifecycle integration is more than a technical issue – it’s a business problem
  • The complications in connecting ‘just’ two endpoint tools
  • The misleading power of APIs
  • The need for semantic understanding and domain expertise
  • The difficulties in scaling beyond two integrated tools
  • The best way to approach large scale integration

You can download the eBook here.

And of course, please don’t hesitate in contacting us to discuss any software development lifecycle issues you may have.

We will also be discussing in detail the huge challenges involved in software lifecycle integration on Tuesday, January 31st during our special live streamed event, TasktopLIVE. You can find out more about the event here.

The Co-op Experience at Tasktop

Thu, 01/05/2017 - 08:07

This post aims to provide a glimpse into my role as a Junior Software Engineer co-op, in the hopes of informing prospective co-op students about what they can expect if they decide to embark on an internship at Tasktop. Clearly, my experience is unique and my own, as any software engineer’s experience at any company is highly dependent on the technical challenges facing their team, their team members, and the employee’s own technical background and role within the team. So I will focus on aspects of my experience that are most likely to be consistent with the experiences of my fellow co-ops—such as company culture, the responsibility co-ops are entrusted with, and the outstanding mentorship—and avoid the specific technical challenges I faced.

To provide context, I am a third-year student at the University of British Columbia, studying towards an Honours degree in Computer Science. Prior to coming to Tasktop in September 2016, I held two internships at a time-tracking software company called Replicon.

My favorite aspect of working at Tasktop is the highly social working environment. Cubicles do not exist at Tasktop. Engineers work in the same large, open room with floor to ceiling windows that look out onto Stanley Park and Robson Street. Everyone eats lunch together, and there is a weekly happy hour, where you can mingle with co-workers that you don’t interact with during a normal working day. Combine the office layout and regular weekly socials with the fact that everyone is very friendly and enjoyable to spend time with, and you have a phenomenal social environment.

Another important reason why Tasktop is an awesome place to intern is the relative lack of distinction between co-ops and full-time engineers. You will begin by working on lower-priority tasks that allow you to become acquainted with the code-base, as it would be for any new employee. From this point, you progress into more complex tasks and, based upon the technical challenges facing your team, you will be able to choose tasks that allow you to develop your technical capabilities in the areas that interest you. The key point is that you will always be writing production-level code. Co-ops are fully integrated into the teams, taking part in daily stand-ups, sprint retrospectives, and sprint planning events. And, co-ops are strongly encouraged to get involved in the code-review process. At Tasktop, co-ops are allowed to take on a great deal of responsibility, and treated as trusted team members, which has been crucial to my growth as a software engineer.

Most importantly, the mentorship you receive at Tasktop is top-notch. Each co-op is assigned a different mentor, so each co-op’s experience varies, but every mentor is very knowledgeable, open to questions, and willing to provide guidance. That means every co-op has a strong mentor to lean on. In my opinion, mentorship is the most important part of any co-op’s experience, as having that role model makes it easier to learn how to become a strong software engineer.

Some other great perks of working at Tasktop include:

  • Flexible working hours.
  • Free bananas and beverages.
  • Being situated near Robson Street means that there are plenty of awesome places to grab lunch.

All in all, working at Tasktop has been a phenomenal experience. I’ve met amazing people and I’ve developed substantially as a software engineer.

Tasktop’s 2016 Year-in-Review and 2017 Predictions

Thu, 12/29/2016 - 08:29

2016 was an exceptional year for Tasktop. We increased revenues in fiscal year 2016 by 75 percent, expanded our global footprint by growing our partner and customer ecosystems, and improved our product offerings. As a result, we grew to over 100 employees across Canada, Germany, Poland, and the U.S. to support the growing Tasktop community.

Tasktop released numerous updated product offerings including our Gateway capability. This capability allows organizations to automate the connection between their DevOps automation tools and their lifecycle management tools, providing enterprise organizations with an end-to-end DevOps Integration Hub.

In addition to updates for new versions of supported third-party tools, we also added new integrations for Agile and project management, and visual modeling for requirements management. This means that even while our customers’ toolchains are changing or being upgraded, they can have confidence that their tools will remain integrated, with over 300 versions of endpoint products supported.

Thousands of people signed up for our webinar series, where we talked about a range of issues impacting large software development and delivery teams. In case you missed them here’s some of our most popular events:

While 2016 was admittedly a strange year, it was one for the books at Tasktop, and we anticipate 2017 being even better thanks to our outstanding team of partners and customers. To give you an idea of what we’re thinking about in the year ahead, here’s a quote from our VP of Industry Strategy, Betty Zakheim, recently featured in DevOps Digest:

“Organizations have started to realize the benefits of their Agile and DevOps transformations, but these benefits have largely been local optimizations. Agile development teams have become more responsive and adaptive in the way they deliver “done increments” from their backlog, but often still struggle to extend their collaboration beyond their scrum teams. And DevOps initiatives have done outstanding work in using automation to create an environment that enables continuous delivery. But the dream of unifying these initiatives into a single software development and delivery value stream has largely eluded the vast majority of organizations. In 2017, this will start to change as CIOs increase their demands for visibility into the business value that their delivery teams create and the tools that enable a unified value stream become easier to use.”

This idea of visibility across the value stream is something that some of our most visionary customers have already started to explore. You can learn about Nationwide’s journey in their recent webinar: How Nationwide and Tasktop Achieved Continuous Visibility Across the DevOps Lifecycle. These are just some of the exciting developments that we’re looking forward to exploring more in 2017.

Happy New Year!

Role Model Ladders: A Concrete Path to Getting More Women in Technology

Thu, 12/15/2016 - 07:44

This was a week of extremes for me. Seven customer visits in a whirlwind trip to Europe. It was exhilarating, as every one of them was impressed with how Tasktop is innovating. But there was something missing. Women. There was not a single woman in any of the meetings I attended. Disheartening. Then, my 10-year-old daughter chose a woman on my team as her role model to write about for her school project. Back to exhilarated. And now, on the airplane for the long trip home, reflecting on this roller coaster of emotions, I realized something that can help girls and women, especially women in technology. We need “role model ladders”. And you can help. Let me explain.

What is a role model ladder? As Albert Schweitzer once said, “Example is leadership.” Basically, people need their role models to be attainable examples of what they can be. That means role models need to be similar enough, or close enough in age, to help someone imagine the path that lets them “be” like that role model. Sure, heroes are great, but our role models need to be closer to who we are. For example, my 10-year-old daughter needs to be able to look up to someone who is just starting out in a career – because she can imagine that. And that person, the person who is just starting out in her career, needs someone to model who has, say, 10-15 years of experience. And that woman in turn needs to see a woman in a significant management position. Each rung in the ladder is quite important – and if you are missing a rung in your organization, it severely limits the likelihood of creating a thriving female cohort.

So how can we create these ladders in the technology industry? Here’s how Tasktop is doing it. One of our three founders, our Chief Science Officer, is female. Very early on, as the company began growing, she proactively talked about and reminded Mik, our CEO, that in order to foster a great and collaborative workplace in tech you need to actively recruit and retain women. She knew that in technology women don’t come knocking on your door. You have to find them. Mik took this to heart and he found, well, me ;). I didn’t find Tasktop, Tasktop found me. Then it was my turn. As my team began to grow, I had hundreds of resumes cross my desk…but no women. So, I contacted a nearby university and found, you guessed it, a female professor in the information systems department. She actively reached out to talented women in her program and encouraged them to apply. And that is how we hired the woman my 10-year-old daughter has chosen as her role model for her school project. Now that is quite a ladder! And our Senior Director of Engineering took a look at his management team and, recognizing that they were all men, consciously sought out a talented female engineering manager. He just built what is likely one of the hardest rungs in the ladder – because women engineers have a strong tendency to move out of engineering entirely as they progress in their careers. But now all of the co-ops in our engineering group see a clear path. And that will undoubtedly make a difference for our company.

This week, while on these customer visits, I did notice there were some women in the development bullpens. But if all they see is men attending the “important meetings,” the ladder will be broken. You can change that in your company. There is really no magic to it. Simply look at the women in your company’s organizational chart to see where you are missing rungs in your ladder. Then focus on those areas. Be specific. Cultivate a woman to fill the middle management role in IT, or a senior engineering role. And make sure your culture and environment are inclusive, so that when you expend this energy and find a great woman, her contributions are welcomed and she will stay and grow with your organization. It will take effort, and you may need to get creative about how you find and cultivate talented women – creating an inviting culture where women want to stay also requires creativity and perseverance. But it is worth it.

It is only through small but intentional steps that we can change things. Tasktop is doing it. Your company can too. And, I guarantee that if your daughter comes home and says that she is writing about someone in your ladder, you’ll feel exhilarated and hopeful about the future.

*Originally published on Code Like a Girl.

Modularity – The Next Major Milestone in Creating an Infrastructure for Innovation

Tue, 12/13/2016 - 14:32

With all the drum beating that takes place at tech conferences, one could be fooled into thinking that adopting Agile and DevOps principles is a surefire way of achieving innovative software development. But for every success story, and there are many, there is always an expensive and painful tale of failure.

Last year our CEO and founder, Mik Kersten, visited over 200 organizations and found many of them were wasting big IT budgets on failed software delivery transformations. These are organizations seeking to innovate their business models to respond to digital disruption in their respective markets. This trend is understandable – you only have to look at how Tesla has changed the automotive industry to see how software is dramatically altering the dynamics of manufacturing.

Many of these enterprises were attempting to imitate the software delivery methods of prosperous digital start-ups such as Netflix, Facebook, LinkedIn et al, without realizing just how different their business models are. These organizations, such as financial and insurance institutions, can have 10x, sometimes 100x more developers than those popular digital-native brands. And to make things worse, these developers are often working within separate tools and methodologies to deliver software projects.

This fragmentation across the toolchain creates roadblocks to scaled Agile and DevOps transformations, which hinders the ability to create a value stream for better software-driven products and services. It makes it increasingly difficult for these organizations to compete with smaller digital enterprises whose more compact teams work more collaboratively and proactively.

The main problem is that there is something fundamentally broken when it comes to scaling software development infrastructures. No matter how many DevOps conferences your staff attend, these siloes between teams are not going to automatically disappear. The structures in place are just too big and too embedded within the organization’s processes and working culture. To reach an agile/DevOps utopia, there has to be a seismic overhaul of how tools are linked and how the relevant personnel work and think.

What do I mean by a seismic overhaul? Well, for organizations to reinvent themselves they need to create a modular toolchain that enables innovation. They need to move away from trying to create a single tightly-coupled toolchain via building point to point integrations, or by seeking out a vendor with a one-platform offering. Such a platform is not compatible with the fast-paced modern IT climate.

What is compatible is a modular best-of-breed toolchain whereby components can be taken out or added in response to evolving requirements i.e. the toolchain evolves as the organization does. However, this is no easy feat.

To help these organizations, Mik Kersten will be hosting a webinar this Wednesday (14th December) to expand on why modularity is the next major chapter in the integrated software development narrative, where he will cover:

  • Real-life examples of large-scale Agile and DevOps transformations that are failing
  • How to make the toolchain modular to support change and option value
  • How to create value stream visibility and identify the bottlenecks to focus IT investment
  • Customer examples of how to create an infrastructure for innovation

Can’t make the live webinar? Everyone who registers will get access to the recorded presentation, so make sure to sign up so you can watch on demand.

When: Wednesday 14th December
Time: 10am Pacific, 1pm Eastern, 18:00 GMT
Register today!

Merging nested Lists or Arrays with Java 8

Tue, 11/15/2016 - 09:48

When accessing 3rd party APIs with pagination, we tend to see the same pattern over and over again. Usually, a response (represented as a POJO) looks something like this:

class Result { public List<Item> getItems() { ... } }

Be it from a third party service or your own APIs, after retrieving all results of the API call, we end up with something like a List<Result>. Great. We don’t really care about the response itself, we’re actually interested in all the items of the responses. Let’s say we want all the items, filter some of them out and transform them into a TransformedItem. People usually start writing something like the following:

List<Result> results = ...
List<TransformedItem> newItems = results.stream()
    .map(result -> result.getItems())
    .filter(item -> item.isValid())
    .map(item -> new TransformedItem(item))
    .collect(toList());

Oops, this doesn’t even compile. The problem is that the first map doesn’t return a Stream<Item> but actually a Stream<List<Item>>. In order to merge/flatten all those nested lists, you can use Stream#flatMap. The difference is quite simple: #map allows you to transform an element in the stream into exactly one other element, whereas #flatMap allows you to convert a single element into many (or no) elements.

List<Result> results = ...
List<TransformedItem> newItems = results.stream()
    .map(result -> result.getItems())
    .flatMap(List::stream)
    .filter(item -> item.isValid())
    .map(item -> new TransformedItem(item))
    .collect(toList());

Just in case you’re working with a 3rd party API that returns something as ugly as List<Item[]>, you can use the same pattern – just choose the corresponding flatMap function.

class QueryResponse { public Item[] getItems() { ... } }
...
List<QueryResponse> responses = ...
List<TransformedItem> newItems = responses.stream()
    .map(response -> response.getItems())
    .flatMap(Arrays::stream)
    .map(item -> new TransformedItem(item))
    .collect(toList());
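To see the whole pattern run end-to-end, here is a minimal, self-contained sketch. Result, the String items, and the “validity” check (length greater than 1) are stand-ins for whatever your real API response and domain types look like:

```java
import java.util.Arrays;
import java.util.List;
import static java.util.stream.Collectors.toList;

// Stand-in for a paginated API response holding a nested list of items.
class Result {
    private final List<String> items;
    Result(String... items) { this.items = Arrays.asList(items); }
    public List<String> getItems() { return items; }
}

public class FlatMapDemo {
    public static void main(String[] args) {
        List<Result> results = Arrays.asList(
                new Result("a", "bb"),
                new Result("ccc"),
                new Result()); // an empty page is handled for free

        // Flatten the nested lists, keep only "valid" items
        // (here: length > 1), then transform each one (here: upper-case).
        List<String> newItems = results.stream()
                .map(Result::getItems)
                .flatMap(List::stream)
                .filter(item -> item.length() > 1)
                .map(String::toUpperCase)
                .collect(toList());

        System.out.println(newItems); // [BB, CCC]
    }
}
```

Note that the empty Result simply contributes zero elements to the stream – no null checks or special-casing needed.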

Have fun with #flatMap and let me know in the comments how you’ve used #flatMap in your scenarios. A great explanation of how to compose streams, and some of the concepts behind them, can be found in Martin Fowler’s Collection Pipeline article.

No Project Too Large

Tue, 11/08/2016 - 06:21

Imagine a world with no capital projects. We’d have no high-speed trains, no nuclear waste facilities, and no cruise ships. At a recent IBM Watson IoT seminar on Systems Engineering in Capital Projects, we learned about the latest and greatest projects, including the high-impact Crossrail project, presented by Chris Binns, which will add 10% to London’s commuter capacity. Many wouldn’t think of all the dependencies (or coupling, as it is called in systems engineering) that the new high-speed railway has on existing Network Rail stations. An example is the lifts throughout existing stations that need to be upgraded to meet the accessibility requirements set out by the Crossrail project.

Increased risk & cost escalation
But these expensive and high-risk initiatives are not the sole property of high-speed train projects. Paul Fechtelkotter from IBM showed us the many forms of capital projects and systems engineering. A number of industries, such as oil and gas, suffer from cost escalations, with the number of engineering hours per asset skyrocketing, and productivity falling.

Typically, capital projects have a global impact, and as such, increased risk. Increased risk is also due to often having to project hundreds of years (or hundreds of thousands of years in the case of nuclear waste facilities) into the future.

On IoT
As we were at an IBM Watson IoT seminar, it wasn’t surprising that the impact and the many uses of IoT were explored. In particular, IoT allows us to monitor systems and assets throughout their lifecycle. Whilst sensor-based intelligence often comes with challenges when adopted across sites with geographical variations (such as temperature), it helps future-proof the system, especially in heavily regulated industries. It should be noted, though, that IoT is not limited to “devices”: it also concerns the technology and processes used, in a wider sense, to enable “smarter living”.

Know your requirements, and know them early
Towards the end of the seminar we had an open discussion, and one of the questions was how to know our requirements five or ten years before we even start a project – for instance, when designing and building a new office. How could we know, at present, the technologies that we could exploit in five or ten years’ time?

The consensus was that to be able to assess the “unknown unknowns”, we need to perform good problem analysis. And, what was re-iterated throughout the day, good problem analysis is based on sound requirements management.

Further, it is important to catch errors in the requirements early: as the impact from errors increases with time (think of a ripple effect); but on the flip side, any rewards that can be realized earlier in the process will amplify with time. An example of this could be sensors and computing resources for security purposes; which, when deployed early and in the right place, will bring multiplied benefits throughout the asset’s lifecycle.

Document management isn’t requirements management
Another “gotcha” from the day was a reminder that document management is not the same as managing requirements. On average, half the requirements or change requests are missed when extracting them from a document. One of the reasons for these human errors is difficulty in understanding the coupling between requirements, as well as errors in understanding the impact of proposed changes. Thus, when using documents to manage requirements, the true cost and impact of the requirements are lost, along with traceability.

Paul and the other speakers were very good at demonstrating how requirements management toolsets can be used as part of the “V & V model” (verification and validation model) that is championed by many in systems engineering.

Learning from each other
A seminar that brings a variety of professionals into the same room has the major benefit of learning from each other by sharing past successes. As part of his presentation, Paul shared his views on how we may not only look at, and learn from, successes within our own industry, but also from industries with similarities.

But how to pick the right industries to look at? This exercise becomes much simpler when you separate problem from solution. What is the system culture and what tools and processes are in use? To put it simply, focus on and look for similarities in coupling and complexity of the industry; as demonstrated by NASA (presented by Paul) in the image below.

Systems engineering principles are predominantly in use by industries in the top right quadrant (space missions, military early warning). The top left quadrant includes candidates for systems engineering, including marine transport and rail transport.

Summary: Separate the problem from the solution
The one take-home message from the day, and the one I’d like to repeat once more, is to separate the problem from the solution – do that and you’re halfway there. And, of course, based on the discussions it seems like a good idea to use a requirements management system for this purpose. But if I were to suggest that, I would be coupling the solution with the problem, which is quite the opposite of what I’d like to do.