Content Gap Analysis

Tiles

Unique Unsuccessful Searches

The number of unique search queries for which, in at least one search, either no results were found or no result was clicked.

EXAMPLE.

“Threads” is searched 10 times. Three times, results were found and a result was clicked. Four times, results were found but none was clicked. Three times, no results were found (because of filter use), so no click could be made. In this scenario, Searches with No Click = 7, Searches with No Result = 3, and Unsuccessful Searches = 7.
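
A minimal sketch in Python, using made-up per-search records for the scenario above, shows how the three tile values relate to each other (the field names are illustrative and do not reflect the product's data model):

    from dataclasses import dataclass

    @dataclass
    class Search:
        query: str
        found_results: bool  # did the engine return any results?
        clicked: bool        # did the user click a result?

    # The "Threads" scenario: 3 searches clicked, 4 found but not clicked, 3 with no results.
    searches = (
        [Search("Threads", True, True)] * 3
        + [Search("Threads", True, False)] * 4
        + [Search("Threads", False, False)] * 3
    )

    searches_with_no_click = sum(1 for s in searches if not s.clicked)
    searches_with_no_result = sum(1 for s in searches if not s.found_results)
    # A search is unsuccessful if it returned no results OR was not clicked,
    # so the two subsets overlap rather than simply adding up.
    unsuccessful_searches = sum(1 for s in searches if not s.found_results or not s.clicked)

    print(searches_with_no_click, searches_with_no_result, unsuccessful_searches)  # 7 3 7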

Searches with No Clicks

The number of searches in which the user does not click on any of the results.

Searches with No Results

Searches for which no results are produced. The Searches with No Result report now carries context. Instead of simply listing the keywords for which no documents were found, the report provides details such as whether any filters were in use and which search operators were employed. Based on this information, admins can select the best course of action to plug content gaps: create new articles or educate users about advanced search.

A search can be no-result for multiple reasons:

  • Documents with the search keywords do not exist in your data repositories.
  • The user does not have permission to see the documents.
  • Your search client is not connected to the content source where the documents are stored.

Sessions with Unsuccessful Searches

This counts the number of sessions where at least one unsuccessful search was recorded. Both Searches with No Result and Searches with No Click are Unsuccessful Searches.

Daily Average Unsuccessful Searches

The average number of Unsuccessful Searches per day over the selected Date Range.

EXAMPLE.

If the Date Range is from 2020-01-10 to 2020-02-09, the range spans 31 days, so Daily Average Unsuccessful Searches = Total Unsuccessful Searches in that range ÷ 31.

Daily Average Searches with No Click

The average number of Searches with No Click per day over the selected Date Range.

EXAMPLE.

If the Date Range is from 2020-01-10 to 2020-02-09, the range spans 31 days, so Daily Average Searches with No Click = Total Searches with No Click in that range ÷ 31.

Daily Average Searches with No Result

The average number of Searches with No Result per day over the selected Date Range.

EXAMPLE.

If the Date Range is from 2020-01-10 to 2020-02-09, the range spans 31 days, so Daily Average Searches with No Result = Total Searches with No Result in that range ÷ 31.

Daily Average Sessions with Unsuccessful Searches

The average number of Sessions with Unsuccessful Searches per day over the selected Date Range.

EXAMPLE.

If the Date Range is from 2020-01-10 to 2020-02-09, the range spans 31 days, so Daily Average Sessions with Unsuccessful Searches = Total Sessions with Unsuccessful Searches in that range ÷ 31.
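
All four daily averages follow the same arithmetic. As a minimal sketch, assuming you already have the range total from the corresponding tile (the 620 below is a made-up figure), the average is the total divided by the number of days in the Date Range:

    from datetime import date

    range_start = date(2020, 1, 10)
    range_end = date(2020, 2, 9)
    days_in_range = (range_end - range_start).days + 1  # 31 days, both ends inclusive

    # Hypothetical total for the range; substitute the figure shown on your tile.
    total_unsuccessful_searches = 620

    daily_average = total_unsuccessful_searches / days_in_range
    print(days_in_range, daily_average)  # 31 20.0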

Unsuccessful Searches

The graph illustrates trends in the searches made during the selected period, broken down into searches with no results and searches with no clicks. This report helps admins perform gap analysis and track, on a weekly basis, the keywords that did not generate any search results.

Sessions with Unsuccessful Searches

This graph helps admins compare the number of unsuccessful sessions on a weekly basis. The longer the bar, the higher the number of unsuccessful sessions.

Searches with No Click

Search queries for which results were shown on the search client, but users probably did not find them relevant and therefore did not click on any result.

In such cases, it is advisable to review the documents returned for these queries and create more content around them. Doing so can significantly improve the users’ search experience.

The report also captures Successive Searches with clicks, which helps in understanding the user journey. Users may make these successive searches as alternatives to their searches with no clicks. In such a follow-up search, the user may have found a useful document, and the admin might want to boost that document for the unsuccessful search query.
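
The following Python sketch illustrates the idea with a hypothetical session-log format (the event structure is assumed, not the product's export format): for each search with no click, it looks for a later search in the same session that did end in a click, which points at a document an admin may want to boost for the original query.

    from typing import Optional

    # Hypothetical session logs: ordered (query, clicked) pairs per session.
    sessions = {
        "session-1": [("threads", False), ("discussion threads", True)],
        "session-2": [("threads", False)],
    }

    def successive_search_with_click(events, start_index: int) -> Optional[str]:
        """Return the first later query in the same session that ended in a click."""
        for query, clicked in events[start_index + 1:]:
            if clicked:
                return query
        return None

    for session_id, events in sessions.items():
        for i, (query, clicked) in enumerate(events):
            if not clicked:
                follow_up = successive_search_with_click(events, i)
                if follow_up:
                    print(f"{session_id}: '{query}' got no click; the user later clicked results for '{follow_up}'")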

Searches with No Results

Searches for which results are not shown. The Searches with No Result report now carries context. Instead of simply listing the keywords for which no documents were found, the report provides details such as whether any filters were in use and which search operators were employed. Based on this information, admins can select the best course of action to plug content gaps: create new articles or educate users about advanced search.

A search can be no-result for multiple reasons:

  • Documents with the search keywords do not exist in your data repositories.
  • The user does not have permission to see the documents.
  • Your search client is not connected to the content source where the documents are stored.

To further assist content teams, this report captures Successive Searches with clicks, which helps in understanding the user journey. Users may make these successive searches as alternatives to the unsuccessful queries (which produced no results). In such a follow-up search, the user may have found a useful document, and the admin might want to boost that document for the unsuccessful search query.

For example: 13 users searched for the keyword ‘data’ 80 times across 17 sessions, but no results were produced. On the right, 5 successive searches are shown that returned relevant results, which the users ended up clicking.

Top Converted Results Which Missed Top 10 Rank

This report uncovers in-demand content buried deep in search results. It captures the number of times documents were clicked even though they did not appear on the first page of results (the top 10).

Admins can look back at the user journey for these sessions, find the keywords that were searched before these documents were clicked, and apply boosting to them. The next time users search for the same keywords, the clicked documents will appear on the first page.
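
As an illustration, the sketch below scans a hypothetical click log (the field names are assumptions, not the product's schema) for documents clicked at a rank below the top 10, together with the keywords that led users to them:

    from collections import defaultdict

    # Hypothetical click log; each entry records the rank at which the clicked
    # document appeared in the result list.
    click_log = [
        {"query": "reset password", "document": "doc-204", "rank": 14},
        {"query": "password reset policy", "document": "doc-204", "rank": 12},
        {"query": "reset password", "document": "doc-17", "rank": 2},
    ]

    buried = defaultdict(lambda: {"clicks": 0, "queries": set()})
    for event in click_log:
        if event["rank"] > 10:  # clicked, but not on the first page of results
            buried[event["document"]]["clicks"] += 1
            buried[event["document"]]["queries"].add(event["query"])

    # Candidates for boosting, with the keywords that led users to them.
    for document, info in buried.items():
        print(document, info["clicks"], sorted(info["queries"]))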

Average Time on Documents

This report captures the average time users spend on webpages/documents across sessions. A very low average suggests that most users do not find the document helpful. A very high average suggests that the document is very long, or that users left the article open and the session sat idle.

Documents on the search results pages are clicked because users found their titles or summaries helpful, so it is important to track how much time is spent on them. If the average time suggests a high bounce rate or idle time on clicked documents, it is advisable to review those documents and revise them according to user needs.
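
A minimal Python sketch of this kind of review, with made-up dwell times and illustrative thresholds (the 60-second and 1-hour cut-offs are assumptions, not product defaults):

    # Hypothetical dwell times per view, in seconds.
    dwell_times_by_document = {
        "doc-17": [35, 50, 40],
        "doc-204": [310, 280, 295],
        "doc-998": [5400, 4800],
    }

    LOW_THRESHOLD_SECONDS = 60     # assumption: below this, users may be bouncing
    HIGH_THRESHOLD_SECONDS = 3600  # assumption: above this, sessions may be idle

    for document, times in dwell_times_by_document.items():
        average = sum(times) / len(times)
        if average < LOW_THRESHOLD_SECONDS:
            note = "review: users may not find it helpful"
        elif average > HIGH_THRESHOLD_SECONDS:
            note = "review: very long document or idle sessions"
        else:
            note = "ok"
        print(f"{document}: {average:.0f}s on average ({note})")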

KCS Articles

For companies moving to the Knowledge-Centered Support or Knowledge-Centered Service model of content generation, encouraging support teams to double as writers is an uphill battle. KCS Support Console eases the transition. Instead of a blank screen, it displays article templates in which several fields, such as date, category, and in some scenarios the entire article body, are already filled in. Support reps can tweak the template and save it as a draft.

So far, so good. Except that some agents adapt quickly and start publishing drafts almost immediately, while others need time to break the ice. The KCS Article Report helps you identify the quick adapters and the ones who generate the most articles.

The report has two columns: Case Subject and Count.

  • Case Subject lists the cases on which the articles have been created. You can identify a case by its subject.
  • Count captures the number of articles created for a particular case. It is usually 1, but a single case can have multiple articles.

Click on any row in Count to view the title of the article created, the email address of the support team member who created the article, and the day when the article was created. This information is captured in the columns Article Title, Author, and Date respectively.

KCS Report

KCS Report monitors metrics that can be used to quantify the impact of KCS on your support team’s efficiency. The impact, measured in terms of article shares, can be converted into hours of effort and dollars saved.

Here’s a hypothetical scenario. An agent spends, on average, an hour solving a repeat ticket. Suppose the agent, a technical writer, and an approver spend 8 hours collectively to transform a ticket into a published article, and the article is shared 20 times over the year following its creation. The support team then saves a net of 12 hours, because instead of repeating a solution on the phone or typing it in an email, the agent can simply paste the link to the published article in response to the repeat ticket.
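
The arithmetic behind that scenario can be written out as a small Python sketch; the hourly rate used to convert hours into dollars is an assumed figure:

    hours_per_repeat_ticket = 1.0    # agent time to solve a repeat ticket from scratch
    hours_to_create_article = 8.0    # agent + technical writer + approver, combined
    shares_in_first_year = 20

    net_hours_saved = shares_in_first_year * hours_per_repeat_ticket - hours_to_create_article
    print(net_hours_saved)  # 12.0 hours, as in the scenario

    hourly_rate_usd = 40.0  # assumed loaded hourly rate, for converting hours into dollars
    print(net_hours_saved * hourly_rate_usd)  # 480.0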

The savings, both in dollars and in time, add up quickly when multiple agents and hundreds of articles are involved.

Going over the report at regular intervals, you can also get a handle on who among your agents is most active in drafting the most shared articles.

Fig. 1

The report is sorted by article IDs, which are found in the first column, Article (#). The octothorpe represents the total number of published articles. In Fig. 1, four articles have been published, with the IDs 000001004, 000001005, 000001006, and 000001007.

Article Title serves two functions: a quick survey of this column gives a reader a sense of the kind of articles being created, and a search function (Fig. 2) helps a reader quickly pull up any article and see how many times it has been shared.

Fig. 2

The Author Email Id field captures the email addresses of the support agents who drafted the articles. In Fig. 1, you can spot that both the articles are drafted by sumit.aggarwal@gmail.com.

Created Date displays the date of creation of the article.

Fig. 3

If measuring productivity is your goal, then Total Shares is the column to keep an eye on. It displays the share count of each article. The number in each row is clickable and can be used to find out:

  • ONLY FOR SALESFORCE CLASSIC. Which cases the article was attached to (Case Title; column 1 in Fig. 3), and whether the article was emailed (Shared via Email; column 2 in Fig. 3), posted as a comment on a case page (Shared via Case Comment; column 3 in Fig. 3), or attached to a case (Attached to Case; column 5 in Fig. 3).
  • ONLY FOR SALESFORCE LIGHTNING. The fourth column, Copied to Clipboard, captures the events in which a link to an article is copied, which happens when any of the three sharing options is exercised: email, case comment, or case attachment. It is sufficient to select an option; the actual sharing does not necessarily have to occur.

Last updated: Tuesday, June 23, 2020