Centering Data on a Technical Writing Team

We make hundreds of decisions every day at our jobs. Surveys have found that at most companies, the majority of decisions are based on gut feeling and experience rather than data. As a technical writer, it can be difficult to know what data should guide your decisions. As you document features, best practices, and getting-started guides, you hope you are making each user’s life easier. But how can you use quantifiable metrics to demonstrate that you have? And what data can you use to guide future decisions about your technical writing and knowledge management strategy?

I have previously written about how we bootstrapped a usability testing program to use data to guide key product UX decisions. Continuing that theme, let’s look at how data can bolster gut decisions by walking through how we defined tangible KPIs to measure and assess the success of our technical writing program.

When we began working toward a data-centered philosophy for our technical writing team, our first step was to define tangible metrics for tracking success. It was essential that those metrics correlate with our company’s high-level goals and track progress toward achieving them.

  • Make our support team’s lives easier: If our documentation (both customer-facing and internal) is robust, clearly written, and easy to find, it will cut down on the time our support team spends answering customer questions. For starters, it should reduce the sheer number of questions that come in the door, as customers are able to self-serve and find the information they need. If a question does get through to support, there are a couple of ways that good documentation can make the support team more efficient: 
    • If public documentation answers the customer’s question, the support agent can simply send a link rather than reinventing the wheel each time the same question comes in.
    • If internal documentation answers the question, the support team can self-educate. Equipped with the necessary knowledge, they can respond to questions without scheduling meetings with peers or going on wild goose chases through email history or shared file networks.
  • Make our customers happier: Our hope is that our easy-to-read, all-encompassing documentation will make our customers’ experience of Tasktop products smooth, easy, and joyful.  

Both of these goals can be difficult to quantify. Here’s how we measure them:

  • Track the number of support tickets per customer: We tracked the number of support tickets per customer over time to see if it trended downward as we built out our help center and internal resources. This can be a nuanced number, because an increase in support tickets can also correlate with increased customer engagement, increased usage of new features, and so on. So we wanted to ensure we weren’t looking at this metric in isolation.

  • Track the number of support tickets associated with documentation requests: We have a process that allows anyone at Tasktop to submit a documentation request if they aren’t able to find the information they need in our help center. By tracking the percentage of support tickets associated with documentation requests each quarter, we were able to measure how likely it is that support is able to find the information they need in our existing resources. 
  • Track the number of tickets solved within our SLA: Tracking how long a support ticket stays open is tricky. Sometimes a request balloons into several requests. Sometimes a request is associated with a defect and gets blocked by Engineering or Security. Sometimes a ticket is pending additional information from the customer. With all these variables, it can be difficult to assess tickets in bulk. Our goal with this metric was to track the number of support tickets that were easy to close, ideally because documentation already existed to solve the problem. Our hope was that as our help center and internal resource center matured, we’d see more and more tickets being closed quickly. We used a concrete number of hours open to define a “short ticket” so the metric would be easy to track. Rather than measuring the average length a ticket stayed open (which could balloon over weekends, holidays, or customer vacations), we simply classified each ticket as “short” (within our SLA) or “long” (exceeded our SLA) and tracked whether our ratio of short tickets increased over time (it did!).
  • Measure support ticket categories: Our support team uses tags to categorize the types of questions they receive — covering themes like upgrades, user management, and error messages. Each quarter, we review the top trends, and then use those trends to drive future areas of focus for our team and to assess whether previous documentation efforts to address support trends have been successful.

  • Good ol’ pageviews: We also use a web analytics tool to track the number of pageviews for our user docs. We pay particular attention to pages outlining new product functionality, to understand how customers use our documentation to learn about those new features. In addition to pageviews, we also look at user sessions, bounce rates, and time on each page to get a fuller picture of user engagement.
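
The ticket-based metrics above all reduce to simple aggregations over exported ticket records. Here is a minimal sketch in Python; the record fields, tag names, and SLA threshold are hypothetical stand-ins, not Tasktop’s actual schema:

```python
from collections import Counter

# Hypothetical exported ticket records (field names are illustrative only).
tickets = [
    {"customer": "acme",   "hours_open": 6,  "doc_request": False, "tags": ["upgrades"]},
    {"customer": "acme",   "hours_open": 72, "doc_request": True,  "tags": ["user-management"]},
    {"customer": "globex", "hours_open": 3,  "doc_request": False, "tags": ["error-messages", "upgrades"]},
]

SLA_HOURS = 24  # assumed SLA threshold for a "short" ticket

# Metric 1: tickets per customer, to watch the trend over time.
per_customer = Counter(t["customer"] for t in tickets)

# Metric 2: percentage of tickets that triggered a documentation request.
doc_request_pct = 100 * sum(t["doc_request"] for t in tickets) / len(tickets)

# Metric 3: ratio of "short" tickets (closed within the SLA) to all tickets.
short_ratio = sum(t["hours_open"] <= SLA_HOURS for t in tickets) / len(tickets)

# Metric 4: top ticket categories by tag, to drive future documentation focus.
tag_counts = Counter(tag for t in tickets for tag in t["tags"])

print(per_customer)
print(f"{doc_request_pct:.0f}% of tickets needed new documentation")
print(f"{short_ratio:.0%} of tickets closed within SLA")
print(tag_counts.most_common(3))
```

Running the same aggregation once per quarter and comparing the numbers is enough to see whether the documentation-request percentage falls and the short-ticket ratio rises as the help center matures.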

And there’s more! As our technical writing team grows more sophisticated, we are excited to explore click path analytics tools to track anonymous behavioral trends across users such as:

  • What percentage of users clicked the help icon on a new page of our product?
  • What page are users most frequently on right before they click the “Contact Support” link? (And a follow-up question: are there ways to embed help resources on that page to reduce support clicks?)
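
That second question boils down to finding, for each session, the page viewed immediately before a support click. A rough sketch, assuming a simple ordered clickstream of (session, page) events (the page names and event format are invented for illustration):

```python
from collections import Counter

# Hypothetical clickstream: (session_id, page) pairs in chronological order.
events = [
    ("s1", "/docs/install"), ("s1", "/docs/upgrade"), ("s1", "contact-support"),
    ("s2", "/docs/install"), ("s2", "contact-support"),
    ("s3", "/docs/upgrade"),
]

# For each session, remember the last page seen; when a "contact-support"
# click appears, credit the page that immediately preceded it.
prior_pages = Counter()
last_page = {}
for session, page in events:
    if page == "contact-support" and session in last_page:
        prior_pages[last_page[session]] += 1
    last_page[session] = page

# Pages that most often precede a support click are the best candidates
# for embedded help resources.
print(prior_pages.most_common())
```

The same pass can be extended to count help-icon clicks per page to answer the first question.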

These objectives have helped us measure our team’s success, but more importantly, they have given us a foundation of data to help guide our future decisions. 
