January 24, 2017

What Social Listening Tools Don’t Tell You (That Conversation Research Does)

If there is one core reason the Conversation Research Institute exists, it is that social listening tools only collect and represent data. They don’t analyze it. Try as you might, you will never get an algorithm to produce 100% reliable insights simply by counting things. It’s the human processing of the counted things that makes that data useful.

Case in point: the topic analysis feature of social listening tools. In most software designed to “listen” to online conversations, this feature simply counts keywords. Topic analyses are often presented as word clouds. We prefer pie charts, which give a more natural sense of each topic’s volume in relation to the whole.
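Under the hood, this kind of topic analysis is little more than keyword tallying. Here is a minimal Python sketch of the idea; the mentions and keywords are invented for illustration, not real data from any platform:

```python
from collections import Counter

# Hypothetical negative mentions (illustrative only)
mentions = [
    "the chicken was dry",
    "worst chicken I've had",
    "stale chip and soggy chicken",
    "the chip tasted off",
]

# Count keyword occurrences across mentions, as a listening tool would
keywords = ["chicken", "chip"]
counts = Counter()
for text in mentions:
    for kw in keywords:
        if kw in text.lower():
            counts[kw] += 1

# Express each topic as a share of all keyword hits -- the pie-chart view
total = sum(counts.values())
shares = {kw: round(100 * n / total) for kw, n in counts.items()}
print(shares)  # e.g. {'chicken': 60, 'chip': 40}
```

Note what this does and does not produce: frequencies and shares, but nothing about why those words appear.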

Here’s an analysis of specific “Dislikes” around Kentucky Fried Chicken I conducted in 2012. This is very much like the topics chart a social listening platform would produce. You can see that 30% of the negative conversations mention “chicken,” 8% mention “chip,” and so on. (Note: Because this was produced from an automated topic analysis, the keywords it counted and collected are raw, exactly as they appeared in the online conversation at that point in time.)

Topic Analysis Example - KFC

But looking at this, you only know that these keywords were present or popular in those conversations. You don’t know the answer to the critical, insight-producing question: “Why?”

When you perform conversation research, even if you do it using automated tools, you dig a layer or two down to uncover those answers. So here’s a look at Olive Garden’s negative themes from that same 2012 research. We broke out the negative theme of “Taste” to show that the qualifiers … leading to the answer of “Why?” … include Nasty, Disgusting and Like Shit. There’s also Bland, Gross, Like Microwaved Food and Weird.

Topic Analysis - Olive Garden

So we can now apply more specific context around why people didn’t like the taste of Olive Garden. By drilling down yet another level to analyze the mentions of “Nasty” or “Disgusting,” we can see whether specific menu items, specific reasons or perhaps even specific locations drove those qualifiers, and we may uncover insights that inform Olive Garden’s business.
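The drill-down step can also be sketched in code. This toy example, with invented “Taste” mentions, tallies the qualifier words one level below the theme; the most frequent qualifiers start to answer “Why?”:

```python
from collections import Counter

# Hypothetical negative "Taste" mentions (illustrative only)
taste_mentions = [
    "the sauce was nasty",
    "bland pasta, tasted like microwaved food",
    "honestly disgusting",
    "nasty breadsticks",
]

# Qualifiers we drill into, one level below the "Taste" theme
qualifiers = ["nasty", "disgusting", "bland", "gross", "microwaved"]

why = Counter()
for text in taste_mentions:
    for q in qualifiers:
        if q in text.lower():
            why[q] += 1

# Ranked qualifiers give the "Why?" behind the theme
print(why.most_common())
```

In practice this is where human judgment takes over: a person reads the mentions behind each qualifier to find the menu items, reasons or locations driving them, which no keyword count can surface on its own.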

The point here is certainly not to pick on KFC or Olive Garden. These charts were produced in 2012 using automatic theme analysis, and chances are the results today would be very different. The key point is that the analysis was automatic. At Conversation Research Institute, we insist on human analysis to break down the “Why” and offer more concrete insights to help your brand.

A few researchers can’t possibly analyze hundreds of thousands of conversations manually, so for larger conversation sets our process has two steps. First, we isolate posts we consider to be the voice of the consumer; that definition changes slightly depending on the client and project at hand. Then, once we have filtered out posts that do not fit the definition, we sample, if necessary, at rates much higher than traditional research sampling standards.
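That two-step shape, filter first, then sample at a high rate, can be sketched as follows. The posts, the `is_consumer` flag and the 50% rate are all assumptions made for illustration; the real definition of “voice of the consumer” varies by project:

```python
import random

# Hypothetical posts; is_consumer stands in for a project-specific
# "voice of the consumer" definition (assumption for illustration)
posts = [
    {"text": "loved the new menu", "is_consumer": True},
    {"text": "RT @brand: check our promo", "is_consumer": False},
    {"text": "service was slow tonight", "is_consumer": True},
    # ... imagine hundreds of thousands more
]

# Step 1: filter down to the voice of the consumer
consumer_posts = [p for p in posts if p["is_consumer"]]

# Step 2: sample at a high rate (50% here) for manual analysis
random.seed(42)  # fixed seed so the sketch is reproducible
sample_size = max(1, len(consumer_posts) // 2)
sample = random.sample(consumer_posts, sample_size)
print(len(sample))
```

The filtering step is what keeps the later human analysis tractable: analysts read a large sample of genuinely relevant posts rather than a trickle of everything.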

The bottom line is this: if you are relying on machines to spit out insights, you are being tricked into thinking charts and graphs are insights. There’s a big difference between counting (to get the what) and analyzing (to get the why).

Let us help uncover more about the voice of your consumer. Drop us a line today for an introductory discussion.
