May 2, 2017
We kicked off a client project today at CRI and spent a fair amount of time deciding on our coding plan. This is the approach we’ll use to code the conversations found using social listening platforms.
What do you visualize when you hear that we “code the conversations?” For some, it means reviewing each individual post and slapping a topic on it. For others, it might mean double-checking data like sentiment or gender. For us, however, coding a conversation requires understanding the client’s product, services, competitors, marketplace and more; preliminarily reviewing a sample of conversations to understand what types of posts we’re going to be reviewing; and anticipating what people might say about the brand or topic at hand.
Our Coding Plan gives us the list of unanswered questions about the data. Social listening platforms can find things like sentiment, gender, location and even parse out a topic or theme, but not to a level of certainty we are comfortable with, nor at a level we would recommend you make marketing decisions upon. So yes, we review the data collected for accuracy.
But our Coding Plan also determines what else we want to know about the data the social listening platforms do not provide. Take a look at this snippet from a spreadsheet we recently coded:
You see entries for Sentiment, Account Type and Gender, all of which can be automatically detected. Our first step is to verify the automatic detection worked. Frankly, sentiment accuracy in social listening tools is incredibly disappointing. We estimate that only about 10-20% of posts are scored at all, when about 30-50% of them could be. Gender is only detected about 20-30% of the time, but can be determined manually for about 60-80% of posts, so there’s some heavy lifting to be done.
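That verification step starts with a simple question: for how many posts did the tool actually fill in a value? Here is a minimal sketch of that coverage check. The column names (“sentiment,” “gender”) and the toy rows are our own illustration; real exports from social listening tools use different schemas.

```python
# Sketch of a coverage check on exported social listening data.
# Column names and rows are illustrative, not any tool's real schema.

def coverage(rows, column):
    """Share of rows where the tool filled in a value for `column`."""
    if not rows:
        return 0.0
    filled = sum(1 for row in rows if row.get(column))
    return filled / len(rows)

# Toy export: only some rows arrive pre-scored by the platform.
rows = [
    {"post": "Love this product!", "sentiment": "positive", "gender": ""},
    {"post": "Meh.", "sentiment": "", "gender": "female"},
    {"post": "Where can I buy one?", "sentiment": "", "gender": ""},
    {"post": "Terrible support.", "sentiment": "negative", "gender": ""},
]

print(f"sentiment coverage: {coverage(rows, 'sentiment'):.0%}")  # 50%
print(f"gender coverage: {coverage(rows, 'gender'):.0%}")        # 25%
```

Rows the platform left blank are the ones a human reviewer scores manually, which is where the 60-80% manually determinable figure comes from.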
The rest of the scoring columns you see are categories of information we determined would be insight-fertile for the client in question. We read each post to understand the context of the mention. Was it a promotion from a re-seller or a recommendation from a customer? Did they express any emotion in their description of the product or service (as opposed to whether a word describing an emotion appeared in the text, which is all social listening platforms detect)? What feature of the product did they mention specifically? Was there a specific issue or topic about that feature that stood out? What was the use case of the product or service (in this case, what type of location)? And did other use cases emerge beyond what the product or service was primarily sold for?
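Those questions effectively become a codebook: one column per question, with the values a reviewer is allowed to assign. A minimal sketch follows; the category names paraphrase the questions above, and the allowed values are invented examples, not a real client codebook.

```python
# Illustrative codebook derived from the coding questions above.
# Category names paraphrase the post; allowed values are invented.
CODEBOOK = {
    "context": ["reseller promotion", "customer recommendation",
                "question", "complaint"],
    "emotion_expressed": ["joy", "frustration", "trust", "none"],
    "feature_mentioned": ["free text"],
    "feature_issue": ["free text"],
    "use_case_location": ["home", "office", "outdoors", "other"],
}

def validate(coded_row):
    """Return columns whose value falls outside the codebook
    (free-text columns always pass)."""
    problems = []
    for column, allowed in CODEBOOK.items():
        if allowed == ["free text"]:
            continue
        if coded_row.get(column, "") not in allowed:
            problems.append(column)
    return problems

row = {"context": "customer recommendation",
       "emotion_expressed": "delight",       # not in the codebook
       "feature_mentioned": "battery life",
       "use_case_location": "home"}
print(validate(row))  # ['emotion_expressed']
```

A check like this keeps multiple reviewers consistent, which matters once you are hand-coding thousands of posts.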
Planning your coding means anticipating where you’ll find the most useful answers in your research. It’s the social media analysis equivalent of crafting the right questions in traditional market research. We like to think we’re pretty good at that part. Hopefully, this helps you get good at it, too.
If you’d like to see what conversation research can reveal about your business, customers, competitors and marketplace, drop us a line. We’d be happy to discuss it with you.
November 21, 2016
A key value proposition for Conversation Research Institute is that we apply conversation analysis to social listening data to find the insights that can help drive your business decisions. But that’s not just a fancy set of words; there’s real reasoning behind it.
First, know that by offering analysis we mean that social listening tools by themselves aren’t enough. Pretty charts, graphs and word clouds don’t do your business any good if you can’t explain what they mean, how the data was discovered and what insights surfaced that can help you.
No social listening software does that for you. You need conversation analysis, from a human being, to understand the data and surface that information manually.
Case in point: while working on a research project for an upcoming Conversation Report, we found this entry in a sea of data about the elderly care space:
“The social worker at the nursing home ~ when mom first went there ~ had to go to bat for mom and went to court to get a guardian (not my brother) for mom.”
The software in question gave this entry a neutral sentiment and picked out no sub-topics or themes. It surfaced “social worker,” “nursing home” and “guardian” as word cloud entries but, again, attached no sentiment or context to them.
Because we manually score and analyze this data, and our perspective is the voice of the consumer as it relates to elderly care providers (nursing homes, assisted living facilities, independent living communities and other long-term care providers), we can add much more context to the analysis:
- The sentiment is negative toward the nursing home because the patient needed an advocate
- The sentiment is positive toward the social worker who served as advocate
- The source is a family member
- The theme is patient advocacy
- A sub-theme is non-family guardianship
And that’s before we went to the original post (which has other excerpts appearing in the research) to clarify even more:
- The brother in question filed for guardianship after lying for years about having the mother’s power of attorney
- The social worker was advocating for the patient, but also the rest of the family
- The author (a daughter of the patient) was considering hiring a lawyer to fight the brother’s claim for guardianship.
So family in-fighting over the burden of care, cost and decision was another important theme.
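Notice what the coded entry above captures that a single auto-detected label cannot: different sentiment toward different targets in the same post, plus layered themes. A sketch of how one such entry might be represented (all field names are our own illustration, not any tool’s schema):

```python
# Sketch of one manually coded entry from the example above.
# Field names are illustrative, not a real tool's schema.
entry = {
    "excerpt": ("The social worker at the nursing home ... went to court "
                "to get a guardian (not my brother) for mom."),
    "source": "family member",
    # One post can carry opposite sentiment toward different targets.
    "sentiment": {
        "nursing home": "negative",   # the patient needed an advocate
        "social worker": "positive",  # served as the advocate
    },
    "theme": "patient advocacy",
    "sub_themes": ["non-family guardianship", "family in-fighting"],
}

# What the software produced for the same post: one flat, context-free label.
auto_detected = {"sentiment": "neutral", "themes": []}

print(len(entry["sentiment"]), "sentiment targets vs. one flat score")
```

The gap between those two structures is exactly the context a human coder adds.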
When you let a computer spit out analysis of tens of thousands, or even millions, of conversations, you get roughly one tenth of the context and actual insight possible from truly understanding what is being said. Certainly, at that scale there’s no way to be as thorough.
But relying on automatic charts and graphs keeps you from the one thing you’re looking for: true consumer insight.
That’s what we surface. If you’re interested in finding it for your brand, let us know.