Connecting the world of software delivery.

The linchpin of large-scale software development? Communication.


“I fear the day that technology will surpass our human interaction.” – Anon

Twelve months ago, I walked into Tasktop as a software rookie. I knew what software was, of course, and I knew of its importance in a digital world. But that was pretty much the extent of my knowledge. Much like the internet, free healthcare and air travel, I was aware that software greatly enhanced my life, but I had never really thought about the ‘how’.

Those days are now long gone. Well, at least in terms of my ignorance of software development. I still assume that the internet is a natural gas, that doctors and nurses work purely out of the goodness of their hearts, and I still don’t want to put too much thought into how planes – i.e. GIANT PIECES OF METAL – stay in the air. It’s not good for the ol’ ticker, especially as I fly over to England from Canada to see my mum from time to time.

That said, everything that you think software development entails is probably true. Super smart and creative people, lines and lines and lines of code, lots of gadgets and programs, intricate processes and alien languages…it’s all there. Oh, and acronyms. Lots. I often feel like I’m trying to decipher a game of drunken scrabble.

In short, a lot of complex stuff goes on behind your bank account, favorite game, app and website. And that stuff is perpetually changing – as constant as the movement of time – a restless beast that’s always seeking to improve and evolve. But none of this was really a surprise – I’ve seen Tron.

No, the biggest realization was that while enterprise-level software development is highly advanced in terms of tooling and programming, it’s somewhat backwards in terms of communication. And this is what often holds back an organization’s capability to continuously produce high-value software – or, more specifically, restrains the key people involved in driving and optimizing the process.

If you asked industry ‘outsiders’, or even some ‘insiders’, what the fulcrum of software development is, they’d likely say ‘technology’. Makes sense. Software is created by technology. It’s maintained by technology. It’s enhanced by technology. Hell, it is technology. But the one thing technology cannot do is develop software without the human touch.

By the same token, humans can’t develop software – good software, anyway – without nifty technology; they need effective ways to plan, build and continuously deliver software. But the relationship between practitioners and technology goes even deeper than that. They need each other. They’re beholden, symbiotic, practically one and the same, bound together by the same goal. Except for one significant distinction: technology is not a sentient being (sorry Siri, Watson et al.)

While technology undoubtedly makes our working and personal lives better in so many ways, we all know it’s a double-edged sword. We’ve all considered defenestration, imagined the relief of sitting back and watching our crash-prone laptop burst through the office window and plummet to the street below. Our computerized buddies, on the other hand, have only ever wanted to help us.

And this ill-feeling, this frustration, is much worse in software development because it’s silent. It’s a quiet, lingering pain, like a nascent cavity in a tooth. And finding, identifying and addressing that invisible discomfort is critical if you’re to make a success of large-scale software delivery. Because the best-of-breed tools can only take us so far – we are the brains. And we must communicate with other human brains if we’re to harness technology to its full potential.

I am reminded of the below quote (author unknown; it’s been attributed to Albert Einstein and Leo Cherne, among others):

“The computer is incredibly fast, accurate, and stupid. Man is incredibly slow, inaccurate, and brilliant. The marriage of the two is a force beyond calculation.”

There are loads of great tools for all stages of the software delivery process: project management tools have all the functionality to manage projects; Agile development tools have all the tricks for planning and executing development; testing tools are damn fine at putting code through its paces; and so on. But they all need to work together to maximize their value, communicating and sharing information so their users (and their brilliant brains) can work as efficiently and effectively as possible on the same projects in real time.

But they don’t. These tools are built for purpose, and not necessarily designed to integrate with each other, meaning the flow of information is stymied. Key artifacts such as defects do not automatically flow between tools, forcing practitioners into time-consuming, frustrating, mind-numbingly boring means of manual communication: endless email threads, status meetings, spreadsheets and duplicate data entry – like astronauts trying to talk to each other using cups and string.

We all hate admin, and it’s enough to make anyone jump ship and seek pastures new. Lands where automation reigns supreme and AI is a treasured sidekick. We loathe monotonous repetitive tasks that distract from the job at hand. Those chores that don’t add any value.

In the case of practitioners across the software value stream, all they want to do is use their expertise and skills to deliver value for the customer and their business, pushing their abilities to the max and looking damn good in the process. But they’re not going to do that if they’re trapped in administrivia hell and spending time away from their real job.

To wrap up: while there are hundreds of awesome pieces of technology in this industry, we can’t forget their sole purpose, and that’s to help people. If tools regularly induce pain as well as joy, key practitioners are going to become disillusioned, because we often feel pain more acutely, and for longer, than joy.

So make sure you focus on human interaction across your tooling landscape to transform your double-edged sword into a single blade and create a ‘force beyond calculation’.

ELK Stack for Improved Support

Fri, 09/15/2017 - 10:59

The ELK stack, composed of Elasticsearch, Logstash and Kibana, provides world-class dashboarding for real-time monitoring of server environments, enabling sophisticated analysis and troubleshooting. Could we also leverage this great tooling in situations where access to the server environment is an impossibility? Recently, while investigating a customer support case, I looked into whether we could create a repeatable process for analyzing log files provided as part of a support case. Following are the details of the approach that we came up with.

The requirements were pretty straightforward:

  • Enable analysis of multi-GB logs provided as part of a support request
  • Use familiar, first-class tooling
  • Zero-installation usage by anyone on the support or engineering team
  • Zero maintenance (no infrastructure needed)

For this we chose ELK and Docker Compose, with the idea that anyone could bring up and tear down an environment with very little effort. Rather than monitor logs in real time however, we needed to pull in logs from a folder on the local machine. For this we used Filebeat.

This is the docker-compose.yml that we came up with:

elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
  volumes:
    - ${PWD}/02-beats-input.conf:/etc/logstash/conf.d/02-beats-input.conf
    - ${PWD}/log:/mnt/log

filebeat:
  image: docker.elastic.co/beats/filebeat:5.5.1
  links:
    - "elk:logstash"
  volumes:
    - ${PWD}/filebeat.yml:/usr/share/filebeat/filebeat.yml
    - ${PWD}/log:/mnt/log

This Docker Compose file brings up two containers: elk, which as you might have guessed runs Elasticsearch, Logstash and Kibana, and filebeat, a container for reading log files that feeds the elk container with data.

The filebeat container is the most interesting one: it reads files from a local folder named log in the current directory of the Docker host machine. With the brilliance of ${PWD} support in Docker Compose, all we have to do is move support log files into that folder!

The following filebeat.yml configuration is needed:

filebeat.prospectors:
- input_type: log
  paths:
    - /mnt/log/*
  include_lines: [".*? ERROR "]
  multiline.pattern: '^\s*\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d,\d\d\d \['
  multiline.negate: true
  multiline.match: after

processors:
- add_cloud_metadata:

output.logstash:
  # The Logstash hosts
  hosts: ["logstash:5044"]

This one is configured to handle multi-line log entries (including Java stack traces) where the initial line of each log entry starts with a timestamp. The multiline.pattern above may need adjusting to suit your log files.
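To make the multiline behaviour concrete, here’s a quick sanity check (a Python sketch; the sample log lines are invented for illustration) showing that the pattern matches the first line of a log entry but not a stack-trace continuation line, which gets folded into the preceding entry:

```python
import re

# The same expression as multiline.pattern above: a new log entry begins
# with a "YYYY-MM-DD HH:MM:SS,mmm [" timestamp at the start of the line.
entry_start = re.compile(r"^\s*\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d,\d\d\d \[")

first_line = "2017-09-15 10:59:01,123 [main] ERROR com.example.Service - request failed"
stack_frame = "\tat com.example.Service.handle(Service.java:42)"

print(bool(entry_start.match(first_line)))   # True  -> starts a new entry
print(bool(entry_start.match(stack_frame)))  # False -> appended to the previous entry
```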

All that remains to get this working is the beats configuration, 02-beats-input.conf, which uses a bit of filtering hackery to split up the unstructured log entries into structured data before it’s added to Elasticsearch:

input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "\s*(?<entry_date>\d\d\d\d-\d\d-\d\d) (?<entry_time>\d\d:\d\d:\d\d),(?<entry_time_millis>\d\d\d) \[(?<thread_id>[^\]]+)\] (?<severity>[^\s]+) (?<category>[^\s]+) - (?:(?<error_code>CCRRTT-\d+(E|W)):\s+)?(?<message_text>.*)" }
  }
  mutate {
    add_field => { "entry_timestamp" => "%{entry_date}T%{entry_time}.%{entry_time_millis}Z" }
    remove_field => ["entry_date", "entry_time", "entry_time_millis"]
  }
  mutate {
    remove_field => ["message"]
  }
  mutate {
    add_field => { "message" => "%{message_text}" }
    remove_field => ["message_text"]
  }
  grok {
    match => { "message" => "\s*(?<message_summary>.*?) Cause Context:.*" }
  }
  grok {
    match => { "message_summary" => "\s*(?<message_first_sentence>.*?\.).*" }
  }
}
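To see what those grok captures produce, here is the main pattern translated into a Python named-group regex (slightly simplified, and run against an invented sample log line):

```python
import re

# Python translation of the grok pattern above: (?<name>...) becomes (?P<name>...)
log_pattern = re.compile(
    r"\s*(?P<entry_date>\d\d\d\d-\d\d-\d\d) "
    r"(?P<entry_time>\d\d:\d\d:\d\d),(?P<entry_time_millis>\d\d\d) "
    r"\[(?P<thread_id>[^\]]+)\] (?P<severity>[^\s]+) (?P<category>[^\s]+) - "
    r"(?:(?P<error_code>CCRRTT-\d+(?:E|W)):\s+)?(?P<message_text>.*)"
)

sample = ("2017-09-15 10:59:01,123 [worker-1] ERROR com.example.Sync - "
          "CCRRTT-1001E: Connection refused.")

fields = log_pattern.match(sample).groupdict()
print(fields["severity"], fields["error_code"])  # ERROR CCRRTT-1001E
```

Each named group becomes a structured field on the Elasticsearch document, which is what makes the Kibana charts below possible.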

After creating those files I ended up with the following:

./
./docker-compose.yml
./log/
./02-beats-input.conf
./filebeat.yml

After a simple docker-compose up, I moved over 56GB of log files into the log folder and grabbed a coffee. After a few minutes I was happily analyzing the situation using a Kibana dashboard:

In this example, we can see a chart of error codes and distinct messages over time.
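A chart like that maps directly onto an Elasticsearch aggregation. As a rough sketch (the time interval and the .keyword suffix are assumptions that depend on your Logstash index template, and entry_timestamp must be mapped as a date), the underlying query looks something like:

```json
{
  "size": 0,
  "aggs": {
    "entries_over_time": {
      "date_histogram": { "field": "entry_timestamp", "interval": "1h" },
      "aggs": {
        "by_error_code": {
          "terms": { "field": "error_code.keyword", "size": 10 }
        }
      }
    }
  }
}
```

In practice, Kibana builds queries like this for you when you configure a visualization, so there is rarely a need to write them by hand.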

To make this process even smoother, we used elasticdump to export our Kibana dashboards for other support cases.

To export dashboards:

elasticdump --input=http://localhost:9200/.kibana --output=$ --type=data > kibana-settings.json

To import dashboards:

elasticdump --input=./kibana-settings.json --output=http://localhost:9200/.kibana --type=data

Using ELK for post-mortem analysis of log files is a snap. The approach outlined above makes the process repeatable with trivial steps that anyone can follow, with no need to maintain ELK infrastructure.

This article was originally published at greensopinion.com.

Integration Across the HPE and Micro Focus Product Portfolio

Thu, 09/14/2017 - 09:50

Hewlett Packard Enterprise (HPE) has recently merged with Micro Focus to create the seventh-largest pure-play software company under the Micro Focus brand. DevOps is a top solution area at Micro Focus, now supported by a much larger product portfolio. In addition to HPE’s products, the DevOps portfolio includes products from the relatively recent acquisition of Serena Software, among others.

Micro Focus’ value proposition emphasizes enabling the delivery of applications with end-to-end visibility across the toolchain composed of their product offerings. How can this newly combined portfolio deliver on visibility and collaboration? In the video linked below Tasktop Product Manager Trevor Bruner demonstrates how Tasktop connects key tools in the Micro Focus landscape including Micro Focus ALM (formerly HPE ALM), Micro Focus ALM Octane (formerly HPE ALM Octane), Solutions Business Manager (formerly Serena SBM), and Dimensions RM (formerly Serena Dimensions RM).

Tasktop is a longtime integration partner of HPE and most recently collaborated with HPE to deliver the free HPE ALM Octane I/O integration product now called Micro Focus ALM Octane I/O. This connects Micro Focus ALM or Micro Focus ALM Octane to Atlassian JIRA, CA Agile Central (Rally), Microsoft TFS/VSTS, and VersionOne. Learn more about Micro Focus ALM Octane I/O and request your free license.

We’re excited about the new Micro Focus and look forward to connecting the broad portfolio as a key integration partner, delivering end-to-end visibility and collaboration across tools.

Lessons in adopting Docker to decouple applications from infrastructure

Wed, 09/06/2017 - 10:23

The software development landscape is perpetually changing. From new programming languages and their respective frameworks to the adoption of DevOps and other lifecycle tools, tech companies (and software-driven organizations) must anticipate these advancements to maintain a competitive edge.

At Tasktop, the ‘newest’ tool we are becoming more familiar with is Docker, which has seen steadily growing adoption by thousands of software companies reaping the benefits of decoupling their applications from infrastructure. Our company’s experience with Docker has grown from only a few developers using it to containerize our product, to most of our developers using it to host private repository environments such as Atlassian Jira, Gitlab and Bugzilla to test their backend code changes. We have also incorporated it into our internal infrastructure to transform our single-application VMs into Docker hosts running several applications at once.

Our adoption of Docker has been a slow burn over the course of a couple years, and the process has had its share of struggles and triumphs along the way. During my time at Tasktop, I’ve had the opportunity to learn how to leverage Docker for testing purposes and share my experience with developers across several different teams. Throughout this process, one question has stuck in my mind: how do software companies adopt and adapt to new technologies in a way that doesn’t disrupt developer cycle time? What’s the best way to pay back this technical debt?

This blog doesn’t seek a definitive answer to this question, since there is no right or wrong answer. Instead, it will look at our adoption of Docker as an exploration into this problem with a multitude of answers.

As a co-op software engineer, I believe co-ops and interns can play an important role in assisting companies to adopt and transition to new technologies. I spent my first three months at Tasktop with an experimental team of fellow co-ops and a full-time engineer/mentor with the goal of finding strategies to develop connectors faster and more effectively.

Our experience with Docker

When developing, one of our biggest challenges is testing code changes in isolation without affecting other team members, since we would frequently perform tests against one shared repository instance (e.g. Jira, Gitlab). The lack of a versioned test environment also created problems for ourselves and other teams, since it was difficult to reproduce and resolve defects that could arise in previous versions of our code against specific versions of a repository.

By creating a Docker image, each developer could run and configure their own repository to test against on their own machine, and our team could run tests against previous versions and configurations of a repository. Since then, our demonstrated work has encouraged other teams to take the time to learn Docker and leverage it for their own testing needs.
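As a concrete (and purely illustrative) example of what that looks like, pinning a test repository to an exact version is just a matter of choosing the image tag; the service below is a sketch, not our actual setup:

```yaml
# Illustrative only: an isolated, version-pinned GitLab instance for testing
gitlab:
  image: gitlab/gitlab-ce:9.5.4-ce.0   # pin the exact version under test
  ports:
    - "8080:80"                        # each developer maps their own port
```

Reproducing a defect against an older repository version then comes down to changing the tag and re-running docker-compose up.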

Engineering managers – for good reason – may not want to redirect their full-time engineers’ focus from fixing high-priority defects to learn something like Docker, and instead can call upon their co-op engineers to jump into the uncharted territory. This scenario has benefits for both the students and the company as a whole: the co-op students can gain the necessary skills working with the in-demand technology, and the knowledge they acquire can be passed along to full-time employees to build upon for future development.

It might go without saying, but having one or two software engineers with experience in the adopted tool is necessary for incremental change. Our experimental team benefitted greatly from having a couple engineers with Docker experience to code-review our Dockerfiles and discuss our Docker testing process. They helped us get the ball rolling, and now we are paying it forward to other teams as they ‘Dockerize’ their repositories.

Pain points

Rolling out a new technology with only our in-house experience has had its challenges along the way. At first, it was hard to facilitate necessary cross-team collaboration. There was a disproportionate emphasis on our experienced Docker developers to review new Docker code. Even documenting best practices for future Tasktop Docker developers has proved to be difficult, since each problem can be unique and doesn’t lend itself to a one-size-fits-all solution.

What about you?

What are some of your thoughts on alternative strategies when introducing new tools/technology in a software company? Is it best to look for professional training, hire more developers with the necessary experience, or try to make it work with your own talented engineers? Like our experience in developing with Docker, there are countless ways to get to your desired result. We would love to hear about your approaches to introducing and rolling out new tools, technologies and methodologies.

Further reading

How to optimize your software development co-op/internship program

Moving from a tech giant to a startup

Wed, 08/30/2017 - 11:15

In April 2017, I was let go from the company where I started my career. It had been a great few years of learning and building my career alongside friends and respected colleagues, so it was hard news to hear. It also didn’t help that it was just three weeks before my wedding, which my wife and I had spent over a year planning.

It seemed like the worst possible situation at the most inconvenient time. Then fate intervened. As one door closed, another one opened, leading to one of the biggest changes (and greatest opportunities) I’ve experienced in my life: working at Tasktop.

The transition

Since my career began at a large tech company with a major corporate structure and over 100,000 employees, it was a pretty large change to join a company that had only recently reached the 100-employee mark (now over 130 and counting). But it was a positive change.

Everything I knew about bureaucratic processes, a slower pace of work, and layers upon layers of management was thrown out. Where there was once red tape, I now have the flexibility to help adjust processes to get things done more efficiently. Those days where I had little to do and was left twiddling my thumbs? They have been replaced with a constant stream of fulfilling work that really makes an impact on my company and the products we build.

Meanwhile, the chain of command I was used to became a management structure that involves reaching out directly when needed, or taking a walk to the coffee shop across the street to grab a coffee and chat. It’s all about efficiency, transparency and collaboration. While it hasn’t been the easiest transition from a big corporate mindset, I’ve learned to appreciate how Tasktop works as a smaller company, and the benefits the company’s approach yields.

Life at Tasktop

As I approach the end of my first three months working at Tasktop, I’ve already begun to feel like an important part of the company. Within a few weeks I was helping to make decisions and taking on increasingly important tasks. While a larger company might ramp up a new employee for a few months, I felt I was heavily involved and a valued team member from the outset. I was able to skip the “new guy” phase and was immediately accepted as a fully-fledged Tasktopian. Not only did the overall culture of Tasktop play a part in this, but also the great people that I am lucky to work with every day.

Significantly, the work I do actually matters and helps the growth of the company. And that’s not just me or my team, that is everyone at Tasktop. We are all in this together and we work hard to make our products into something we can be proud of. I’m very excited about Tasktop Integration Hub and the impact it is making on large-scale software delivery. While a large company has its own benefits, I’ve never felt this passion for my work before. Everything we do matters and is valued, no matter how small.

My Future at Tasktop

My time at Tasktop so far has flown by. It feels like just yesterday that I had taken my first step through the office doors. However, some days I feel like I’ve always been here. These first few months have been such a wonderful experience, and it has made me very excited to see how Tasktop grows in the future.

Even though we are a smallish company for now, we have big ideas and big plans. And while the circumstances that brought me to Tasktop weren’t ideal, every day I am thankful for them, as they led me to one of the best decisions I’ve ever made. Not to mention helping to ensure that our wedding was an unmitigated success!

Interested in working at Tasktop? Check out our current openings and join our ever-expanding team to continue our mission to fundamentally transform how organizations plan, build and deliver quality software at scale.

Tasktop Connect 2017: Speaker Program Unveiled and Accommodations

Mon, 08/28/2017 - 10:37

As we approach the tail end of summer, it’s only natural to feel a bit sad.  Farewell beach days, BBQs and lazy drinks on the patio, hello autumnal jackets and staring nervously at the thermostat. But it’s not all doom and gloom – Tasktop Connect 2017 is on the horizon (4th October 2017, Columbus, Ohio), bringing with it, we hope, an Indian Summer.

The Speaker Program

With little over a month to go (well, 35 days, 18 hours, 30 minutes, and 43 seconds – but who’s counting?), we’re delighted to provide an early glimpse into the carefully curated speaker program that is jam-packed with expert insights into optimizing your software delivery and scaling your Agile and DevOps transformations.

Click here to see the current agenda and for further information on each session. Please note, we’re still refining and updating the program, so keep checking the agenda in the coming weeks. With a panel discussion and additional insights from the Tasktop experts, the speaker program covers all your large-scale software delivery needs.

Where to stay?

Now before you begin your registration (with a limited offer 20% discount using code: CONNECT17), let’s talk housing.

When I asked Tasktop pre-sales engineer and Columbus native, Jeff Downs, if we could all crash at his place, he was psyched. But once we calculated how many sleeping bags, air mattresses, bunk beds and cups of coffee we’d need to accommodate everyone, we moved to plan B.

If you book now, you can secure a room at the Courtyard Marriott Downtown Columbus at a discounted rate of just $129 per night. The rate is only secured until 3rd September, so book now to avoid disappointment.

In addition to helping us with accommodation, Jeff has also written about why Tasktop Connect is happening now and why Columbus.

Have more questions about Tasktop Connect? Drop me a line at laurel.heenan@tasktop.com and I’ll look to help you in any way I can.

Don’t forget to register, and we look forward to seeing you in Columbus for what promises to be a seminal moment in enterprise-level software delivery.

Traceability – the key to process improvement

Wed, 08/09/2017 - 10:23

In software development, we’re always working to improve the way we plan, build and deliver software. The most popular ways we do this are often by applying automation, DevOps or Agile concepts – or a combination of all three.

However, unless we have a traceable process, something is always going to get missed and/or we’ll have to reinvent the wheel each time the same situation arises. In that regard, traceability is the linchpin of process improvement.

Process is great, but it’s a double-edged sword. It streamlines work, makes it more efficient and democratizes it. With process, there is not just one person who can do a job. Anyone who knows the process can pitch in and help out. But there is also an ugly little secret…

Process is awful! It almost always involves different teams with different priorities. It invariably involves multiple people. It doesn’t respond well to change. And it takes time to get it right. The ultimate goal of a process improvement initiative may not be immediately obvious to everyone, and without a clear goal, your initiative can fail.

Tasktop sits in an interesting position within an organization’s process improvement initiatives. We help different teams coordinate, but only after they agree on their new process. During our sales process, we proactively help our customers come to agreement on what they want. We like to call this the marriage counselling portion of the cycle. The physical implementation of their integration is usually the easy part. Getting different teams to agree is much harder.

And we’re no different.

Tasktop uses Tasktop Integration Hub to integrate:

  • Our CRM tool
  • Product management tool
  • Agile tool

By practicing what we preach, we understand our customers’ pains and we’re using our own tool to help solve them.

Here’s what I’ve learnt from our ongoing efforts to crystallize some of our internal processes:

Start Small

Don’t boil the ocean. Identify the smallest, most impactful bit that needs to be addressed first. Get a win between teams. This builds momentum and encourages continued effort. Without ongoing buy-in from the various stakeholders, your initiative will stall. As an example, we didn’t start out with a full integration of our CRM tool, Product Management tool and Agile tool. We started out by synchronizing Features from our Product Management tool to our Agile tool. Only after that process was running smoothly did we feel comfortable scaling up to include other tools.

Start Inefficient

Start off manually. Don’t automate. As Michael Hammer said, “Automating a mess yields an automated mess.” Yes, this will take a certain amount of energy (some will say “waste”…). There will be manual updates. People will complain. It will take a bit more time in the beginning, but think of it like an investment. Before we began automated integration of our CRM tool, we had our field email in requests. It took time to pull these into our Product Management tool. It wasn’t the most efficient method, but it taught us a lot. The additional time spent now can yield big dividends in the future. Why? Read on…

Iterate Quickly

Starting small and inefficiently enables you to change your mind. Not only will you be able to see where you can make improvements, but the upfront costs aren’t high because you didn’t spend months building out an automated process (the ideal, but misguided, solution). Instead, you documented the process (very lightweight), and when you see areas for improvement, you don’t have to rebuild anything. You simply update the documentation. You’ll have lower overheads, so you’ll meet fewer objections to altering the process. In our case, we quickly realized that the emails being sent in didn’t have enough information for us to properly triage them. We provided email templates and changed them many times until we were all happy that they captured the necessary information. Changing an email template was quick and cost nothing, so we were able to iterate rapidly.

Now Automate

Now your first process is working nicely. You’ve iterated and verified that it fits the needs of your stakeholders. You’ve confirmed that it works within the bigger picture. Now it’s time to automate. Now you can build out your tools. You can implement a more sophisticated integration strategy (a la Tasktop). Now that we were all happy with the necessary information in the Request emails, we could ensure that our CRM tool and Product Management tool had the proper fields set up to allow a smooth integration of that information.

Scale Up

Now it’s time to identify the next little bit you need to bite off. See how it folds in with the previous process. Does it change what you’ve been doing? If so, you can pivot quickly. If it doesn’t, that’s a sign that your first process is working as designed. Keep the momentum going. As I said before, the first part of this process was to integrate Features between the Product team and our Engineering team. We automated that well before tackling the Requests coming from our field. The most important part of scaling is to realize that as your processes expand, you have to follow the same methodology. Start small, start inefficient, iterate quickly and only then automate.

A process takes time

I could talk about the ugly underbelly of certain processes, but it would take too long to explain and Randall Munroe has already done a better job than I could to illustrate my points.

And finally, I’m reminded of the story of the architect who waited a year after constructing a building complex before installing sidewalks. Sure, people got a bit muddy. There wasn’t a clear path to go from point A to point B, but people figured it out. Then, a year later, the “process” of walking between buildings was formalized with cement. You know there were some people upset by muddy shoes. People had to walk across wet grass from time to time. There was some cost to that, but that cost ended up paving the way for the best overall solution (I make no apologies for the pun). Now, I don’t know if this story is true or not, but like the best stories, its truth doesn’t come from whether it happened. Its truth comes from its wisdom.

Creating a good and sustainable process doesn’t happen all at once, it takes time and it’s, well…it’s a process.

Speak to us today to see how we can help you define your process and integrate your best-of-breed tools to greatly enhance your organization’s software delivery.

 

No Tool Is An Island: How We Dogfood Tasktop to Build DevOps into our Business

Tue, 08/01/2017 - 06:34

As an organization becomes more complex, we find an increasing degree of specialization in functions and roles. These bring the depth of knowledge and expertise required for managing complicated business functions.

Along with these specializations comes a need for the domain specific tools that are best suited to the work being performed. To enable our teams as much as possible, we want to give them these best-of-breed tools – the ones they want to use and will allow them to perform at the highest level possible.

This functional specialization and custom tooling also has negative consequences, though – it allows islands of productivity to form that are difficult to connect and leverage for synergies. These islands prevent the organization from operating as a single integrated system.

At Tasktop, we have witnessed this effect first-hand as our organization has grown from 3 to 125 people and taken on multiple rounds of funding.

There have been several key inflection points where our organization has had to make an investment in specialization, each producing more productive teams while simultaneously creating a more fractured organization.

We have been willing to make this decision because the benefits of specialization are so great that they outweigh the negative cultural impacts of allowing islands to form. At the same time, we also have a plan in place to pay back this “cultural debt” through a strategy of integration, building the bridges needed to connect the islands in our value stream.

In our recent webinar, we highlighted some of the key pages from our integration playbook that have brought a tremendous amount of value to the organization. Briefly, they are:

1. Field Request Pipeline

This “first-mile” integration is between Salesforce, where our people in the Field are logging customer requests, and TargetProcess, where Product Managers are grooming the company backlog. It allows a Request object in Salesforce to flow into the Product team’s tool, and keeps the information in sync so the Field can stay up to date on the status, and even communicate with the Product team using the artifact.

2. Product Feature Pipeline

Once the Requests are in TargetProcess, they need to be triaged into Features and assigned to the right Engineering Team. This integration between TargetProcess and JIRA flows those Features over into Epics on the appropriate teams’ backlog. Again, synchronization means that the information in both tools is always accurate and up to date, and communication can easily take place right in the artifacts, negating the need for email and Slack.

3. Code to Story Sync

Once teams have broken down their epics into stories, they need to keep track of what code has changed to execute on the story being worked on. Our integration between JIRA and Gerrit uses a post-commit hook in Gerrit to link the relevant code reviews to the story. This provides developers insight into questions like “What code changed as part of this Story?” and “Which features were in which release?”
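
The post doesn’t share the hook itself, but the core trick – scanning commit messages for story keys so the review can be linked back – is easy to sketch. Here is a minimal Python illustration; the `TT-123` key format and the function name are assumptions for the example, not Tasktop’s actual implementation:

```python
import re

# JIRA-style issue keys look like "TT-123"; the project key here is illustrative.
ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def extract_story_keys(commit_message: str) -> list:
    """Return the story keys referenced in a commit message,
    in order of first appearance, with duplicates removed."""
    seen, keys = set(), []
    for key in ISSUE_KEY.findall(commit_message):
        if key not in seen:
            seen.add(key)
            keys.append(key)
    return keys
```

A real post-commit hook would then call the tracker’s API with each extracted key to attach the Gerrit review URL to the story.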

4. Asset to Plan Sync

When working on a story, developers will often request resources from IT. Our integration between our infrastructure and JIRA automatically creates an asset record in JIRA for each resource we have deployed and keeps the information about it in sync. This allows us to link resources directly to Stories and manage our infrastructure with a high level of visibility into the work that is being performed on it.
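
As a rough illustration of what such an automatically created asset record might look like, here is a hedged Python sketch. Every field name below (including the custom field) is hypothetical – real JIRA payloads depend on your project’s configuration:

```python
def asset_record(resource: dict) -> dict:
    """Shape a deployed infrastructure resource into a JIRA-style
    'Asset' work item payload. Field names are illustrative only."""
    return {
        "summary": f"{resource['kind']}: {resource['name']}",
        "issuetype": "Asset",
        "labels": [resource["environment"]],
        "customfield_resource_id": resource["id"],
    }
```

Keeping the resource ID on the record is what lets a story be linked directly to the infrastructure it runs on.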

5. Value Stream Reporting

This integration allows us to leverage the tools in our value stream for business intelligence. We can funnel all the information about changes on every artifact, in real-time as they occur. We then point a business intelligence tool at the database and visualize live insights on our information radiators.
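
To make the idea concrete, here is a minimal Python sketch of how one change notification might be flattened into a row for that central database. The payload shape and column names are assumptions for illustration, not the actual Tasktop schema:

```python
from datetime import datetime, timezone

def to_event_row(source_tool: str, artifact: dict, change: dict) -> dict:
    """Flatten one artifact-change notification into a row for an
    events table that a BI tool can query. Field names are illustrative."""
    return {
        "tool": source_tool,
        "artifact_id": artifact["id"],
        "artifact_type": artifact["type"],
        "field": change["field"],
        "old_value": change.get("old"),
        "new_value": change.get("new"),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
```

Because every tool’s changes land in one uniform shape, cycle-time and bottleneck queries become simple aggregations over a single table.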

These integrations have resulted in a large reduction in waste and increase in productivity across the organization. They allow our relatively small business to service the largest banks, insurance, retail and manufacturing organizations in the world efficiently and effectively. They are also just the beginning of our internal journey into end-to-end value stream integration.

For the whole story, please check out the on demand webinar where we go into full detail about the specialization effect and how we use integration to resolve issues and enhance our value stream.

Top reasons to attend Tasktop Connect 2017 (including Early Bird Discount)

Thu, 07/27/2017 - 10:16

By now, we hope you’re aware that we are hosting our first conference, Tasktop Connect, on October 4, 2017 in Columbus, Ohio.

At Tasktop Connect, leaders from some of the most successful IT organizations such as Comcast, Nationwide, Lockheed Martin, and others, will share their best practices and lessons learned from undergoing large-scale Agile and DevOps transformations. Leaving no stone unturned, they will be joined on stage by industry experts such as Gene Kim, in what promises to be the definitive ‘lowdown’ on the software development and delivery space.

If our speakers alone are not enough to entice you, let me share a few more reasons why you should attend:

Transformations Aren’t Working

IT trends come and go as quickly as the seasons, and organizations are investing in new staff, tools, and processes to keep up. And yet even with all this investment, software development and delivery still isn’t fast enough, or effective enough, to drive the change the business wants. With 10+ compelling sessions that cut across a range of industry verticals, attendees will gain invaluable insight into how leading enterprises have started to turn this around, making software development and delivery a competitive advantage for their business.

Customer use-cases

In addition to presentations from Tasktop CEO and co-founder, Mik Kersten, and Tasktop VP of Product Management, Nicole Bryan, Tasktop customers will be sharing their individual journeys towards an integrated value stream using Tasktop Integration Hub. While your organization may be using Tasktop to integrate your PPM and BA tools, another organization may be using the platform to connect Developers to Testers. Despite these varying business contexts, you’ll learn how you can unleash your organization’s full software delivery potential with technology and support. We promise you will leave inspired and ready to implement the tactics at your own organization.

On-site Support

If you’re a Tasktop customer or partner, you already know the Tasktop Support team are real MVPs. Lucky for attendees, the team will be on-site and ready to answer all your technical questions – no matter how big or small. You’ll learn how to optimize your software delivery process to extract the most value for your business from the crème de la crème in the industry. What could be better than that?

Grow your network

Just as Tasktop brings together your software delivery teams, tools, and value stream, this customer-focused event will connect you with the software delivery community: Agile and DevOps industry experts, fellow customers and partners, and Tasktop’s ‘go-the-extra-mile’ product gurus. With plenty of coffee breaks, lunch, and evening entertainment, attendees will have the opportunity to engage with like-minded individuals from around the world (supplemented with plenty of delicious food and beverages!) for a truly fulfilling day and night.

Still not convinced? Contact me or one of my colleagues to find out exactly what Tasktop Connect can offer you and your business.

But just to push you over the registration edge, here’s a discount code for 10% off registration: TTCONNECT.

Register by August 7th to secure our Early Bird discount rate.

We can’t wait to see you in Columbus!

P.S. Want to know why we chose Columbus? Check out our blog by Tasktop’s pre-sales engineer (and local Ohio boy) Jeff Downs, who explains why our inaugural event will take place in ‘The Arch City’.

Support for GitLab, ServiceNow Express and Modern Requirements4TFS Now Available

Tue, 07/25/2017 - 14:50

Increasingly we’re seeing large organizations that want to pick and choose the best-of-breed tools that combine to create a customized tool chain that supports their specific software development and delivery needs. So we’re pleased to announce that today we’ve added support for GitLab, ServiceNow Express and Modern Requirements4TFS, expanding the options available when creating a modular tool chain.

Here are some of the benefits of integrating each tool:

GitLab

By connecting GitLab with ITSM tools such as ServiceNow and Zendesk, or Agile Planning tools such as CA Agile Central, JIRA, LeanKit, or VersionOne, you enable better visibility across your organization, with information surrounding the issues being kept up to date in all systems.

For example, when integrating GitLab Issues with Agile planning tools you can:

  • Ensure your teams are aware of the progress of relevant issues, regardless of which tool they’re using
  • Facilitate collaboration between teams by keeping up-to-date information about each issue in both tools
  • Allow GitLab labels to flow to your Agile planning tool of choice, helping to organize your team’s workflow
  • Let developers communicate back and forth via comments

ServiceNow Express

Tasktop has long supported ServiceNow Service Desk, ServiceNow SDLC and ServiceNow PPM, and the addition of ServiceNow Express to our supported tools expands the benefits of Tasktop to users of this product. ServiceNow Express users will now be able to get the same cross-tool traceability and reporting benefits when they connect to other tools used in their software development and delivery organization.

See an example of how you could connect ServiceNow Express and GitLab Issues in this demo video:

Modern Requirements4TFS

Many organizations, particularly those building for Microsoft platforms, use Modern Requirements4TFS as their go-to solution for Requirements Management. All requirements are stored natively as work items in TFS / VSTS. They may also use TFS for development work item tracking, testing and release management. However, when members of the extended team adopt specialist tools, for service desk management or project / portfolio management, for example, the work items in all tools can be kept current by being synchronized.

Tasktop allows you to flow artifacts – such as user stories, requirements, and test cases – to and from TFS and Modern Requirements4TFS and the myriad third-party tools that your other teams may use. Updated statuses, due dates, owners, comments, and attachments all flow seamlessly from one tool to the other, breaking down communication barriers and enhancing cross-team collaboration.

For more details on all the new features available on Tasktop Integration Hub visit our What’s New page.

 

How to optimize your software development co-op / internship program

Thu, 07/20/2017 - 09:26

Co-ops and internships for students and young graduates have become very commonplace in the software industry. Beneficial for interns and companies alike, an internship program can help budding software developers to enhance their skill set while contributing to the success of the business.

As a software engineering co-op at Tasktop, I’ve witnessed first-hand what it takes to execute a successful internship / co-op program. Below are some key benefits of hiring interns, as well as some key considerations that can help you determine whether such a program is right for your company.

What do interns / co-op programs offer?

  • Learn by doing

Hiring interns really helps the software industry grow. Internships are important to the careers of aspiring software developers, and gaining work experience early on can help interns pinpoint what they really want out of their career. Hiring interns enables them to ‘learn by doing’ in a professional environment, which helps produce better software engineers who can drive the industry forward.

  • Fresh perspective

Interns provide the kind of diversity in thought that relies on a fresh pair of eyes. Interns are unique as they have little to no prior industry experience and limited knowledge of the standard conventions that take place in most software companies around the world. While this means they have a lot to learn, it also means that they don’t have a bias towards traditional systems, and can be useful in recognizing overlooked problems. Without a familiarity bias, they may find new, and better, ways of doing things. They might even discover a new process that could make the entire team more efficient.

  • Develop your own developer

Interns don’t need to be temporary. With the right support and room to grow, you can train a skilled developer that is not only schooled in your approach to software development, but also one that becomes a loyal brand advocate too. Furthermore, when an intern becomes a full-time employee, you negate the need for onboarding, saving your company time and money.

  • Company exposure

There is also an element of exposure. If you are running a software company that targets niche markets, then there’s a chance that many people outside of your sector will not have heard of you, making it harder to attract talent. Software engineers like working in environments that they know will challenge them and help them to grow. The easiest way for them to find such environments is searching for jobs in companies that they are already familiar with and who have already built up good reputations. Hiring interns will increase the number of people who have worked at your company by virtue of them being cycled out every eight months, and the more people who have worked at your company, the more chances your former employees will tell other software engineers how great it is to work there. This can really help boost your company reputation and talent acquisition capabilities.

What are the key considerations for running your own internship program?

  • Supervisor resources

Do you have the resources to supervise and support a co-op intern? Without strong support, they have the potential to accidentally produce a lot of inefficient, unmaintainable code. To ensure this doesn’t happen, implement a mentoring program that is overseen by a strong base of full-time software engineers who are willing to help and provide feedback to steer interns in the right direction.

Tasktop does a fantastic job of creating a strong foundation for its co-op program, with full-time experienced employees easily accessible and willing to spend time helping new hires.

  • Code review

A rigorous code review process, like we have at Tasktop, prevents interns from writing code that other engineers won’t be able to maintain in the future, and it helps interns get up to speed quicker by enabling them to look at code reviews from other engineers. Having a code review system is especially effective as a form of mentoring because interns can teach themselves without requiring dedicated time from full-time engineers to continuously walk them through the code.

  • Onboarding

Without an efficient onboarding process, hiring co-ops and interns can be time-consuming for both the company and the intern – especially if you’re hiring new interns every eight months like Tasktop does. You can save time by hiring interns in large groups and training them simultaneously rather than one-on-one, which encourages questions and group activity, as well as forging a team spirit right from the beginning.

Conclusion

Hiring interns is a complex endeavor, but one that can potentially have a great payoff for your company. As a Tasktop co-op, I’ve expanded my skill set, learnt how to overcome new challenges, and feel I have played a vital role in changing the software development and delivery landscape.

If you’re interested in working at Tasktop, please view current openings on the careers page and/or contact us. Do also get in touch if you’re thinking about running your own internship / co-op program – we’d be more than happy to help!

Optimizing collaboration between Software Engineers and Product Owners by integrating JIRA and Targetprocess

Tue, 07/18/2017 - 09:57

As Software Engineers, we tend to get lost within our ivory towers. We are logical people, and crave logic in the world around us. The real world around us, though, is messy and frustratingly illogical. I am happiest, and most productive, when I have a well-defined Epic to work on, with clear requirements and acceptance criteria. My team and I can silently work away on the code, and resurface into the world when the job is complete.

The problem, though, is that the requirements are not always clear, and the acceptance criteria are often incomplete or missing altogether. This is where that messy, outside world invades our ivory tower. Locating the missing information and context, and communicating with Product Owners, was often a tedious mix of meetings, emails, and frustrating demos. I often felt that half of my job was to be the Product Secretary, keeping track of all the decisions, requirements and acceptance criteria on the original Epic.

As it turns out, I was duplicating all of this information. The Product Owners have tools to keep track of the requirements and acceptance criteria, and we work in our own tools for development. Since we don’t both access each other’s tools, we were forced to keep our own records of everything during meetings and update our own copies of the Epic.

The obvious solution was to be granted access to each other’s tools. In the Engineering department, we are using Atlassian JIRA, with the Agile tools for Kanban. This provides an excellent view into the current state of our project. Meanwhile the Product team uses Targetprocess, a tool designed for project management. Both teams are happy with their respective tools, and have built internal processes that fit with them. However, this satisfaction didn’t extend to collaboration.

Even with access to the other team’s tool, we Engineers still ended up manually duplicating most information into JIRA as there’s no synchronization between the tools. I quickly became frustrated at the constantly changing feature boards in Targetprocess, and having to switch back and forth between the two tools. And since not all of the Engineers had access to Targetprocess, I had to copy everything into JIRA for any fellow Engineer who required the information.

The Product Owners also had their struggles with JIRA. Our Kanban boards tend to be an undulating pipeline of work items, only a portion of which are the Epics and Stories they are familiar with. The rest of our boards contain Defects, Tasks, and Technical Debt items filled with overly complex technical details.

What’s more, by granting access to each other’s tools, we had created an even larger issue. Neither side used the other’s tools unless prompted, yet both sides assumed their changes were now visible to the other team. Collaboration actually got worse.

This is where Tasktop Integration Hub came in. We set up Integrations between Targetprocess and JIRA to synchronize Epics (called “Features” in Targetprocess). This means that any change I make to an Epic is instantly reflected on the Feature in Targetprocess. When I want to ask questions, I just comment on my JIRA Epic. The Product Owner can then reply in their Targetprocess Feature.

Now I am able to stay within JIRA, my tool of choice, and I no longer have to be the product secretary. Even better, the integration removed the need for many of our time-consuming meetings. For example, when an Epic falls out of the release, I can just change the associated version number in JIRA. This change is then instantly updated in Targetprocess, sending configured emails to the Product Owner.

I can now return to my ivory tower and become productive once again…

If you’re working within the software value stream and want to know how Tasktop Integration Hub can dramatically improve the way you work with all other practitioners in the lifecycle, contact us today.

How can DevOps Integration transform your role as a Project Manager?

Tue, 07/11/2017 - 11:34

Before joining Tasktop, I spent several years as a Project Manager working with non-profit clients. During this time, one of the biggest obstacles I faced was overcoming the communication barriers between separate teams at our organization.

So when approached to co-host a webinar with cPrime on the top challenges facing Project Managers, I jumped at the opportunity. In the webinar, Brian Mulconrey – cPrime Agile Coach – walked through the ways that Agile Program Management can transform your PMO challenges into opportunities through continuous communication and delivery. While listening to his excellent presentation, I thought “All very true…but there’s one crucial piece missing!”

And that missing piece was DevOps Integration, the next significant milestone in software delivery. By looking at the software delivery process from a value stream perspective – i.e. as a sequence of activities that design, produce and provide a product and/or service – we begin to see where value is being created and lost, and can optimize end-to-end production.

This state can only be achieved by connecting the DevOps side of the software delivery pipeline with the rest of the software lifecycle for visibility, traceability and governance over the value stream – the holy trinity of requirements for first-class project management.

Common challenges

As a Project Manager, it is your job to straddle several different worlds, working with software developers, QA teams, business analysts, technical writers and more. Each team likely uses separate tools, and has separate internal processes and policies that you, as the Project Manager, must learn to navigate. Without an integrated toolchain (as these disparate tools do not naturally integrate), this can be an exhausting challenge that can test the sanity of even the very best Project Managers.

I used to spend countless days (and even weeks!) waiting on IT to grant me access to the many tools I needed to communicate with each contributor on my team. Once I gained access, I would spend hours watching training videos to learn how to use the tool. And even then I’d get in trouble for submitting requests the wrong way as each team had different practices and policies within their tool.

Even when I was able to access and use each tool, I could lose up to one working day a week. To put it simply, a fragmented toolchain is a costly and time-consuming endeavor that means a project manager is doing more admin than management.

Consequently, balls are dropped, serious issues are missed and avoidable mistakes are made. Lack of integration puts the success of all projects under threat, which ultimately means lost business if customers do not receive the right product or service on time. Not to mention the impact on job satisfaction of talented Project Managers who may lose patience and move on to a better, more connected environment…

A Project Manager’s Nemesis: A Fragmented Value Stream

DevOps Integration allows you to connect the disparate activities occurring in separate tools into one united value stream by connecting those tools into a modular toolchain. Wouldn’t it be great if you could automatically flow information from your tool of choice (maybe a PMO tool such as Microsoft Project Server) in real-time to the other tools that your team members were using? No more double entry into multiple systems, no more twiddling your thumbs while you wait for IT to grant you access to yet another tool, no scavenger hunts for important data.

This is all achievable through true DevOps Integration. It’s much more than just connecting your development and operations tools to improve collaboration when building and delivering software. It’s all about connecting those critical teams and tools with the whole value stream to optimize the entire process, from ideation and planning to testing and customer feedback. The end result? Fast, efficient, continuous delivery of awesome software.

This is how:

  • Each of these purpose-driven tools is connected to the others, and information is able to flow seamlessly between them
  • When you change a deadline, or the owner of a task, or details on a change in scope communicated to you by a customer, you’re able to easily flow that information to your Business Analyst in their own Requirements Management tool so that they can update the requirements for that deliverable
  • Once your Business Analyst has updated the requirements for that deliverable, those details can flow to the tool your software engineers are using to track feature development
  • Once your developers complete work on that feature, they can flow that information to the QA tool your testers are using
  • You can even take all of that information from each tool and flow it into one central database so that you can run your own analytics to identify bottlenecks and high-level patterns that may be impacting delivery for your customers
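
The flow described above hinges on translating field names between tools as information moves along the value stream. Here is a minimal Python sketch of that mapping idea – the field names and the mapping table are entirely hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical mapping from a PM tool's field names to a
# requirements tool's field names.
FIELD_MAP = {
    "deadline": "due_date",
    "owner": "assigned_to",
    "scope_note": "description",
}

def translate_update(update: dict) -> dict:
    """Translate a change from the PM tool's vocabulary into the
    requirements tool's vocabulary, dropping fields with no mapping."""
    return {FIELD_MAP[k]: v for k, v in update.items() if k in FIELD_MAP}
```

In a real integration hub this mapping is configured per tool pair, so a deadline change entered once propagates to every connected system without double entry.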

Proof of Concept

Here’s an example from our own workflow here at Tasktop:

  • When a customer requests a new feature for our product, one of the first steps that our Business Analyst takes is to determine – in collaboration with our engineering team – if that request is technically feasible within our product
  • To do that, our Sales and Professional Services team submits requests in their tool of choice, Salesforce. Those requests then flow over to our Business Analysts’ requirements tool, Targetprocess
  • The Business Analyst can then check a box on the request to initiate a technical investigation
  • Once that box is checked, the request will flow over to JIRA, the tool that our developers use. That new JIRA artifact will then automatically pull in to our developers’ triage process during their daily stand-up call

As you can see, by utilizing their existing processes and tools, we are able to facilitate continuous communication between key players across the value stream and speed up our software delivery to deliver powerful results for our customers.

DevOps Integration with the rest of the lifecycle is vital as there is no single tool platform that provides a silver bullet. While each team benefits from using best-of-breed tools that are built for their specific goals, without enterprise-level integration, all the project-critical information that is created for the sole purpose of being shared with other teams is siloed. Communication and collaboration suffers, while Project Managers are unable to see bottlenecks and trace the flow of work, meaning they can’t make informed decisions that will directly impact the success of a project.

With Tasktop’s DevOps Integration technology – which delivers the best results and Total Cost of Ownership on the market – Project Managers have a dynamic and simple means to connect all tools, teams, and disciplines across an organization, obtaining a holistic overview of the whole value stream. The result is an omniscient and empowered Project Manager, able to focus solely on their job – managing projects and ensuring that the value stream is flowing and consistently delivering value.

If you’d like to learn more, watch the full webinar with cPrime: ‘5 leading challenges facing PMOs – and how Agile Program Management changes the game.’

You can also check out the webinar ‘Eliminating the PMO Scavenger Hunt’, as well as download our short e-book on the same topic.

If you’d like to know more about Tasktop’s DevOps Integration technology, contact us or request a demo today. Say goodbye to soul-crushing admin and hello to smooth project management best practice.

Get Integrated for Free with HPE ALM Octane I/O

Fri, 06/30/2017 - 13:21

If you’re an HPE ALM/QC or HPE Octane user you’ll know that to get true traceability and reporting you need to integrate it with other tools used by software development teams, and there are a variety of ways you can do that.

There’s now a new offering available to you – HPE ALM Octane I/O – which offers integration to JIRA, TFS, CA Agile Central/Rally and VersionOne. And the best part is it’s free for the first 100 users! Leveraging HPE ALM Octane I/O allows you to put HPE ALM/QC or HPE ALM Octane at the center and flow different artifacts such as requirements, epics, stories and defects back and forth between the various systems in real time.

If you’re an HPE ALM/QC user, you might already be familiar with the variety of integration offerings that are available, including HPE Synchronizer, HPE Octane Synchronizer and the HPE Next Gen Synchronizer. You might have even tried them at some point. What makes this new offering different is the tools you can integrate with. HPE Octane I/O integrates with VersionOne and CA Agile Central (in addition to JIRA and TFS), and all versions of those tools are supported. It’s powered by Tasktop, which means it can support enterprise-grade integration requirements – it’s the technology used by almost half the Fortune 100.

Let’s look at one example of how this could be used – connecting HPE ALM with JIRA. By integrating these two tools with HPE Octane I/O, you can automatically flow epics and stories from JIRA to HPE ALM so that testers can see the requirements right in their tool. The integration also synchronizes relationships so that you have traceability between parent and child requirements. Comments or questions logged in HPE ALM are automatically sent back to JIRA so that developers can see everything in their tool. Once test cases are created in HPE ALM, direct coverage status can be sent to JIRA so that iteration managers can see the details of the user story and understand status as it pertains to the testing effort. URLs and artifact IDs can be passed back and forth between the two tools to add even more traceability.

Integrating these tools together not only eliminates manual handoffs and automates traceability, it also allows you to leverage functionality within HPE ALM that you might not have been able to previously. Maybe you have people who are manually importing requirements into HPE ALM, or maybe you’re not using your requirements module at all. With the real-time requirement integration running, you can turn on alerts in HPE ALM. The result: changes to stories in JIRA or another Agile tool will update the requirement in ALM and notify the owners of related test cases that a review is needed. Testing the correct version of the requirement ultimately means better releases and happier customers.

So how do you get started? Simply request a license key from Tasktop through this form. You’ll then be able to get the software directly from HPE. For more information, view the on-demand recording of the Vivit webinar HPE ALM Octane I/O Enterprise Grade Integration with your 3rd Party Agile Tools.

Routes & Ladders

Tue, 06/13/2017 - 10:20

The typical ladder has nice, evenly spaced rungs to get you from the ground to whatever high spot you have your sights set on. But what happens when the top rungs are missing? You can get part way up, but can’t go very high. Or what if all of the bottom rungs are missing? It’s hard to even get off the ground.

This is why fully functioning role-model ladders are important for women in business: women at every level of an organization need to be ready to help other women. They are the “rungs” that can take women from the bottom to the top, and every step between.

Why not an elevator that shoots you right to the top floor? Because having a rung at every level, having role models at each step along the way, helps make the goal seem more attainable. In other words, the women you aspire to be need to be close enough in scope and age, so that you can relate to them.

Don’t get me wrong… heroes are great. We all want to shoot for the moon, but the reality is that our role models have far greater and more direct impact if they are one rung above us – within our reach. If someone is doing a job that you can clearly envision and filling the role in a concrete, realistic way, it’s easier to picture yourself in that role.

Let me tell you a story. My ten-year-old daughter, Bailey, was tasked with writing an essay about her biggest role model. She picked Mara, one of the young women on my team at work. She selected Mara because Mara started on my team as an intern while she was still in college, and she would come over to our house and talk about her job. Bailey would hear me talk about how talented and hard-working Mara was. But I talk about the amazing people I work with all the time. Why did she pick Mara? Because she can relate to Mara. She can see herself in Mara. And the rungs continue. I am Mara’s role model – a female VP of Product Management. My job feels attainable to her. And I look up to Gail, our Chief Science Officer. Every rung matters.

If you agree that “Role Model Ladders” are critical for women in the workforce (or those hoping to be part of the workforce), how do you create them? One step at a time (pun intended). Step one: you need to talk about it. Direct discussion is critical to address any elephants, or lack of elephants, in the room. When you see a missing ladder rung in your organization, go to your HR department or your department head and highlight it for them. Let me offer an example. I once called our Senior Director of Engineering and said, “Do you realize we don’t have any female engineering managers? Next time we’re hiring, let’s focus on that.”

Here in Austin we have the University of Texas—a big and great university. We also have under-recognized resources like Texas State, Saint Edwards University, and ACC. I first posted a job opening on the UT job board. Then I met a female professor at Texas State, so I had a direct local connection. When all I got were resumes from men, I called her and said, “I know there are talented women at Texas State. Can you encourage them to apply for this position?” And guess what? The woman she encouraged to apply is the woman that my ten-year-old looks up to. It’s local. It’s personal. My company isn’t changing the world in broad strokes. We are affecting real women (and the world) in small ways. But if every organization does the same, it will undoubtedly have a broad-stroke effect.

By the way, it is okay (and not illegal) to shoot for diversity: to target underrepresented people. But it may take more time and more energy. I was at a dinner party a couple of years ago with a man who had founded a startup. He said to me, “I don’t have time to wait. I need to fill positions. Ten people walked in the door and they were qualified, so I hired them. If they had been women, I would have happily hired them, but they weren’t.” If we want this to change, we all have to be willing to make the effort to seek out and hire women. Founders of small companies have to expand the search. Doing the right thing for women is undoubtedly doing the right thing for your company: more growth, more success.

It’s also important to note that sometimes women on the middle rungs of the ladder slip off. Often, the people who fill the middle rungs are at an age where they are starting families. Too often they don’t climb back on the ladder because there are too many obstacles to overcome. Truly innovative companies don’t let this happen to talented team members.

I’ll never forget standing outside the London Tube when my son called in tears because he lost an important mock trial at school. I looked at my colleague, a young woman who is not yet a mom, and said, “I have to talk to my son right now.” She watched me as I tried to console him. Yes, I was on a business trip in Europe, but my family comes first. Seeing the reality of how you balance these situations can be crucial for many young women.

While we are at it, let me tell you what middle- and upper-rung women should not do. A female VP of Engineering did this to me when I was on a lower rung looking up to her. She said, “You just suck it up and deal with it” right before I gave birth to my child with a laptop next to me. She was proud of that. Don’t do that to your women colleagues.

Instead, the women filling those higher rungs must be willing to open themselves up and show the personal side. Let the people around them witness the juggling act.  Let them see the moments that are crazy and difficult.  Let it be personal.  We have great heroes like Sheryl Sandberg who write books about what it is like, but I can guarantee you that the colleague who stood with me outside the London Tube and listened to me cry with my son will not forget that.

It is only through small, intentional steps that we can change things for women in the workplace. Tasktop is doing it. Your company can do it. And, I guarantee that if your daughter comes home and says that she is writing about a woman in your ladder, you will feel exhilarated and hopeful about the future.

Managing Open Source Effectively

Tue, 06/06/2017 - 11:33

Unless you are a developer who enjoys reinventing the wheel (and spending countless hours or weeks developing functionality that already exists), it’s likely that you use open source dependencies in your projects. While it is great being able to easily incorporate functionality that you didn’t write into your application, such as user management, it does not come without risks.

Two of the major risks are licensing and security vulnerabilities, both of which are exacerbated as an application grows. Your transitive dependencies (the dependencies that your direct dependencies require) become increasingly hard to track down because there can be many layers of dependencies. Finding all of them can be an onerous task, as they are often hidden inside packages.

Considering your direct dependencies rely on transitive dependencies, this is a serious concern for developers using open source. In this piece, we will explain why improving open source software visibility is critical for managing the risk associated with licensing and security vulnerabilities.
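To see why transitive dependencies pile up so quickly, here is a minimal sketch of resolving a dependency graph. The package names and the graph itself are hypothetical, and real package managers do far more (version ranges, conflict resolution), but the walk is the same idea:

```python
from collections import deque

# Hypothetical dependency graph: each package maps to its direct dependencies.
DEPENDS_ON = {
    "my-app": ["web-framework", "auth-lib"],
    "web-framework": ["http-client", "template-engine"],
    "auth-lib": ["http-client", "crypto-lib"],
    "http-client": ["io-utils"],
    "template-engine": [],
    "crypto-lib": [],
    "io-utils": [],
}

def all_dependencies(root):
    """Breadth-first walk returning every direct and transitive dependency."""
    seen, queue = set(), deque(DEPENDS_ON.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        queue.extend(DEPENDS_ON.get(pkg, []))
    return seen

# Two direct dependencies fan out into six packages in total.
print(sorted(all_dependencies("my-app")))
# -> ['auth-lib', 'crypto-lib', 'http-client', 'io-utils', 'template-engine', 'web-framework']
```

Declaring two dependencies here pulls in six packages, and each of those six needs its license and vulnerability record checked.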

Licensing

Licensing of open source software may seem simple when only a few dependencies are being used, but it can quickly turn into a nightmare when you dig into the details. Not only does one need to keep track of all the direct and transitive dependencies in the product, but one also needs to determine what license is associated with every dependency, which is no easy feat.

Maintaining this list of dependencies and licenses is a tedious task, made even harder by the fact that finding the license for a dependency is rarely straightforward. Based on our own experience with our internal tool, we have come across several problems with this approach.

For instance, finding all direct dependencies along with transitive dependencies requires effort in maintaining scripts that ensure every dependency gets recognized. Then, once a list of dependencies is established, carrying out due diligence of each license can be a laborious endeavor because:

  • Not all bundles have licenses included in the package or stated within the included files
  • Other packages have multiple, sometimes conflicting, licenses
  • Other packages can be downright confusing, such as when a different license is stated in the bundle than in the source code; for example, we have seen cases where the reported license is Apache 2.0 while the source code contains references to GPL 2.0

Incorrect licensing can have significant consequences for a commercial product, since a copyleft license such as the GNU General Public License 2 without any exceptions can require the product’s source code to be made freely available. If, for instance, you incorrectly report a package as licensed under Apache 2.0 when it is actually GPL 2.0, you can suddenly find yourself obliged to open-source what was previously a commercial product. Ideally, then, we want to discover the dependencies with unacceptable licenses and replace them as soon as they are added to the product.

Solving licensing issues

One tool that enables easy management of dependencies, their licenses and their known security vulnerabilities is Sonatype’s Nexus IQ Server. IQ Server is a web application that automatically recognizes components, or parts of components, included within a product and provides information regarding licenses and security vulnerabilities for the discovered components. The application allows for easy bookkeeping of dependencies along with observed and declared licenses, as reported by IQ Server, and the licenses that the user deems correct or effective.

Furthermore, policies can be set on IQ Server to define which licenses are acceptable and which unacceptable, allowing developers to be notified as soon as a product containing unacceptable licenses is scanned. IQ Server’s policies thus allow for the automation of dependency approval, and diminish the need for developers to remember which licenses are to be avoided.
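The idea behind such a policy check can be sketched in a few lines. The license identifiers, policy list, and package names below are illustrative only, not IQ Server’s actual configuration format:

```python
# Illustrative policy: copyleft licenses are unacceptable for this commercial product.
DENIED_LICENSES = {"GPL-2.0", "GPL-3.0", "AGPL-3.0"}

def violations(dependencies):
    """Return (package, license) pairs that break the policy.

    `dependencies` maps package names to their effective license,
    i.e. the license confirmed after due diligence.
    """
    return [(pkg, lic) for pkg, lic in dependencies.items()
            if lic in DENIED_LICENSES]

deps = {
    "web-framework": "Apache-2.0",
    "crypto-lib": "MIT",
    "report-widget": "GPL-2.0",  # declared Apache-2.0, but the source says otherwise
}
print(violations(deps))  # -> [('report-widget', 'GPL-2.0')]
```

Running a check like this on every scan is what lets developers hear about an unacceptable license the moment it enters the build, rather than during a release audit.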

Security

As important as licensing is for the success of a product, that success also depends on minimizing the security vulnerabilities contained within it. Security vulnerabilities could, among other things, lead to access to sensitive data, user impersonation, or denial of service, any of which could have detrimental effects.

Prior to using IQ Server, we would spend time looking up Common Vulnerabilities and Exposures (CVEs) for dependencies included in our product by hand. With IQ Server, however, the process has become much simpler. IQ Server collates research from several sources, including its own research team, providing detailed vulnerability information and suggested upgrades to avoid each vulnerability.

Not only does the application give information on confirmed CVEs for the current version of the component, it also shows a record of CVEs for other versions of the component, allowing you to pick a version with fewer vulnerabilities. As with licensing, policies can also be set up for security vulnerabilities, notifying developers immediately when a dependency with a high-severity security vulnerability is introduced.

IQ Server provides an elegant solution for keeping track of dependencies along with their licenses and security vulnerabilities, and together with Tasktop Integration Hub it can provide instant notifications on changes in the license status of dependencies or updates in policies.

Currently, IQ Server provides web hook functionality for four event types:

  • Policy Management Event
  • Application Evaluation Event
  • Security Vulnerability Override Management Event
  • License Override Management Event

When events such as the updating of a policy, the completion of an evaluation, or a change in the license status of a dependency occur, Tasktop Integration Hub can convert them into JIRA tasks or pass them on to any other connector serviced by Tasktop.
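On the receiving side, an integration typically dispatches on the event type. This is a hypothetical sketch of that routing logic; the payload field names and event-type strings are assumptions for illustration, not the actual IQ Server webhook schema or Tasktop’s implementation:

```python
def route_event(payload):
    """Map an incoming webhook payload to a follow-up action name.

    The four event categories mirror the webhook types listed above;
    the string keys and action names here are illustrative.
    """
    actions = {
        "policyManagement": "sync-policy-change",
        "applicationEvaluation": "create-jira-task",
        "securityVulnerabilityOverride": "notify-security-team",
        "licenseOverride": "notify-legal-team",
    }
    return actions.get(payload.get("eventType"), "ignore")

# A completed evaluation becomes, say, a JIRA task in the target tool.
print(route_event({"eventType": "applicationEvaluation"}))  # -> create-jira-task
```

Unknown event types fall through to "ignore", so adding new webhook types on the server side cannot break the receiver.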

For further information on how Tasktop can integrate with your open source tools to improve security and license risk management, visit our website and chat with one of our team members.