Dawn Stevens, Comtech Services
July 15, 2019
Ask any technical writer what the measure of success is for their work and they will tell you that their documentation must meet the needs of their users – it must give users the information required to answer their questions, solve their problems, and complete their tasks effectively and efficiently. But ask them how they know whether they have succeeded, and answers may vary from an embarrassed shrug all the way through a detailed process of data collection and analysis. CIDM’s June Roundtable sought to identify best practices for collecting user feedback and determining the efficacy of web-based documentation.
Of the members who attended, sixty percent gather some sort of metrics to try to determine how well the pages are received:
- 58% monitor web metrics, such as the number of times a page is viewed, the average time spent on the page, the average time spent looking for a page, and the most popular and unpopular elements on the page (aka heatmaps).
- 50% gather direct user feedback, including page ratings (such as thumbs up/down or stars), online comments provided through a form of some kind, or even direct user testing.
- 25% track positive impact on customer support as demonstrated through a decreased call volume or an increase in the number of closed support tickets because users were able to find what they needed in the documentation.
- 17% receive ad hoc feedback passed on from another context, such as remarks made to marketing or support or at an annual user group conference.
Regardless of how feedback is gathered, everyone agreed that it is important not just to gather it, but to do something with it; for example, consolidate the comments into actionable requirements that are prioritized and executed the following year, or open bugs against the documentation that are tracked until resolved. It is equally important to circle back with users to let them know that their feedback has been received and what actions are being taken. Data confirms increased “product stickiness” when customers feel that their feedback results in changes to the documentation.
Unfortunately, feedback tends to skew toward the negative. More customers leave feedback when they are unhappy or frustrated than when they find what they need quickly and effectively. Determining what one member called “content efficacy” remains elusive. Although one-on-one conversations might shed light, not every organization has the time or budget for such intensive engagement with users. Pragmatically, how can documentation organizations interpret the limited feedback they do receive to give customers a better user experience?
Participants decided that the effective use of customer feedback requires first that the organization define criteria for correlating the data they receive. These criteria can then help answer questions such as:
- Did the user spend a short time on a page because they quickly found the information they needed, or did they simply give up?
- Conversely, did the user spend a long time on the page because they were engaged with the information, or because they found it difficult to read and interpret?
- Did the user provide a poor ranking because the information was incomplete? Inaccurate? Difficult to understand? Hard to find?
For example, data correlation might include comparing the number of support calls received on a subject with the ratings and number of page views for the topic that covers that subject. In cases where the majority of data is simply a rating (the topic helped or it didn’t help), these ratings are used solely as an indication of which pages might need attention, while the team relies on comments to identify the specific changes that should be made.
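To make that kind of correlation concrete, here is a minimal sketch in Python. The topic names, call counts, view counts, ratings, and the flagging thresholds are all invented for illustration; they are not values reported by any member.

```python
# Hypothetical per-topic counts of support calls, page views, and average
# ratings, combined to flag topics that may need attention.
topics = [
    # (topic, support_calls, page_views, avg_rating out of 5)
    ("installing-the-agent", 120, 3400, 2.1),
    ("configuring-sso",       15, 9800, 4.6),
    ("troubleshooting-sync",  95, 1200, 3.0),
]

for name, calls, views, rating in topics:
    calls_per_1k_views = 1000 * calls / views if views else float("inf")
    # A high call volume relative to traffic, plus a low rating, suggests the
    # topic is being found but is not answering the question.
    if calls_per_1k_views > 20 and rating < 3.5:
        print(f"Review candidate: {name} "
              f"({calls_per_1k_views:.1f} calls per 1,000 views, rating {rating})")
```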
Another important criterion to set is how much feedback is required to indicate whether content needs attention. Responding to isolated feedback can result in “one-off” documentation, where each and every edge case is documented, effectively distracting from the mainstream topics. One member explained that she uses the ratio of the number of comments received to the number of page views during the same period to determine content effectiveness. For example, if a page has 64,000 views in a single month and only 100 comments, she extrapolates that most people got what they needed from the content.
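The ratio described above is simple arithmetic, but a small sketch shows how it might be applied as a screening rule. The figures come from the example cited in the discussion (64,000 views, 100 comments); the 1% threshold is an illustrative assumption, not a value the member stated.

```python
# Comment-to-view ratio as a rough screen for content effectiveness.
def feedback_ratio(comments: int, views: int) -> float:
    """Return the share of page views that resulted in a comment."""
    return comments / views if views else 0.0

views, comments = 64_000, 100
ratio = feedback_ratio(comments, views)
print(f"{ratio:.2%} of views left a comment")  # -> 0.16% of views left a comment

# Only pages whose comment rate exceeds the (assumed) threshold are queued for review.
THRESHOLD = 0.01
if ratio > THRESHOLD:
    print("Queue this page for review")
else:
    print("Most readers likely got what they needed")
```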
Many aspects of our June Roundtable discussion touched upon topics included in the upcoming CIDM Best Practices conference. This year’s theme is “The measure of success” and presentations address such questions as:
- What are the metrics that define the success of a technical communication department?
- How can managers determine if their teams measure up to industry norms?
- What data should they collect?
- What outcomes are most important?
Hope to see you there!