
May 2, 2017

Conversation Research Requires a Coding Plan

We kicked off a client project today at CRI and spent a fair amount of time deciding on our coding plan. This is the approach we’ll use when reviewing the conversations found using social listening platforms to code the conversations.

What do you visualize when you hear that we “code the conversations”? For some, it means reviewing each individual post and slapping a topic on it. For others, it might mean double-checking data like sentiment or gender. For us, however, coding a conversation requires understanding the client’s products, services, competitors and marketplace; preliminarily reviewing a sample of conversations to see what types of posts we’ll be working through; and anticipating what people might say about the brand or topic at hand.

Our Coding Plan gives us the list of unanswered questions about the data. Social listening platforms can find things like sentiment, gender, location and even parse out a topic or theme, but not to a level of certainty we are comfortable with, nor at a level we would recommend you make marketing decisions upon. So yes, we review the data collected for accuracy.

But our Coding Plan also determines what else we want to know about the data the social listening platforms do not provide. Take a look at this snippet from a spreadsheet we recently coded:

You see entries for Sentiment, Account Type and Gender, all of which can be automatically detected. Our first step is to verify the automatic detection worked. Frankly, sentiment accuracy in social listening tools is incredibly disappointing. In our estimation, only about 10-20% of posts are scored at all, when about 30-50% of them could be. Gender is only detected about 20-30% of the time, but can be determined manually for about 60-80% of the posts, so there’s some heavy lifting to be done.

The rest of the scoring columns you see are categories of information we determined would be insight-fertile for the client in question. We read each post to understand the context of the mention. Was it a promotion from a re-seller or a recommendation from a customer? Did they express any emotion in their description of the product or service (not whether a word that describes an emotion appeared in the text, which is what social listening platforms report)? What feature of the product did they mention specifically? Was there a specific issue or topic about that feature that stood out? What was the use case of the product or service (in this case, what type of location)? And did other use cases emerge beyond what the product or service is primarily sold for?
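For readers who want to picture the structure, here is a minimal sketch in Python of what one row of a coding sheet like this might look like. The first three fields mirror what the listening platform pre-fills; the remaining field names are illustrative stand-ins for the client-specific categories described above, not our actual template.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CodedPost:
        post_id: str
        text: str
        sentiment: Optional[str] = None        # verified or corrected by the analyst
        account_type: Optional[str] = None     # e.g., consumer, re-seller, news outlet
        gender: Optional[str] = None           # filled in manually where it can be determined
        mention_context: Optional[str] = None  # promotion vs. recommendation
        emotion: Optional[str] = None          # emotion expressed, not just an emotion word present
        feature: Optional[str] = None          # product feature mentioned
        feature_issue: Optional[str] = None    # specific issue raised about that feature
        use_case: Optional[str] = None         # e.g., type of location where the product is used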

Planning your coding means anticipating where you’ll find the most useful answers in your research. It’s the social media analysis equivalent of crafting the right questions in traditional market research. We like to think we’re pretty good at that part. Hopefully, this helps you get good at it, too.

If you’d like to see what conversation research can reveal about your business, customers, competitors and marketplace, drop us a line. We’d be happy to discuss it with you.

March 23, 2017

The Truth About Cognitive Technology for Social Listening

On Wednesday, I presented a talk at IBM Amplify in which I explained the need for human analysis in social listening, the practice that produces what I call conversation research. I’ll admit the opportunity was intimidating. My task was essentially to look IBM executives, developers and users in the eye and say that the social listening tool fueled by the famous Watson cognitive learning software was not very good.

But it’s not just Watson’s attempt at social listening that has issues. It’s all of them. Three of our most recent projects at the Conversation Research Institute tell a disappointing story. When we program these listening platforms to go find relevant conversations, we should see a respectable amount of just that in return. We don’t.

For our Dirt Devil project, we only scored 8.9% of the total posts our social listening software returned as relevant. It was worse for our industry report on the senior living space, with only six percent of the results being the voice of the consumer. A brand study we did for a major healthcare company returned just 7.2% relevant results.

And we’re talking about three different social listening platforms. None we’ve tested scores any better than these numbers.

What this means is without human analysis, scoring and curating of your social listening data — which is time and resource intensive — you’re paying for a lot of crap. And the technology is only getting incrementally better. It’s not growing by leaps and bounds the way the sales people tell you. Even Watson and IBM’s powerful engines have trouble weeding through and deciphering unstructured data like social conversations.

The truth is that when the data is unstructured, inconsistent and unpredictable, cognitive technology can only do so much. At least so far.

In the hopefully not-too-distant future we’ll be able to say, “Watson, find relevant consumer conversations about Dirt Devil vacuums and tell me the themes that surface around product problems,” then see meaningful results in seconds. But that day is farther off than you think.

In the meantime, CRI can help. Let us know how we might help you separate the signal from the noise and deliver consumer insights that help drive smart marketing decisions for your brand.

NOTE: Photo by Tim Moran, one of my fellow IBM Futurists.

January 24, 2017

What Social Listening Tools Don’t Tell You (That Conversation Research Does)

If there is one core reason the Conversation Research Institute exists, it is that social listening tools only collect and represent data. They don’t analyze it. Try all you might, but you will never get an algorithm to produce 100% reliable insights from counting things. It’s the human processing of what has been counted that makes the data useful.

Case in point: the topic analysis of social listening tools. What this feature does in most software designed to “listen” to online conversations is count keywords. Topic analyses are often presented as word clouds. We prefer to look at them as pie charts so there’s a more natural understanding of the volume of a particular topic in relation to the whole.
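To make the point concrete, here is a toy Python sketch of what automated topic analysis boils down to: counting keyword hits and expressing each count as a share of the whole, which is exactly what a pie chart shows. The posts and keywords below are invented for illustration.

    from collections import Counter

    posts = [
        "the chicken was dry and overcooked",
        "got chicken and a chip order that was cold",
        "worst chicken sandwich we have had",
    ]
    keywords = ["chicken", "chip"]

    counts = Counter()
    for post in posts:
        for keyword in keywords:
            if keyword in post.lower():
                counts[keyword] += 1

    total = sum(counts.values())
    for keyword, n in counts.most_common():
        print(f"{keyword}: {n} mentions ({n / total:.0%} of counted topics)")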

Here’s an analysis of specific “Dislikes” around Kentucky Fried Chicken I conducted in 2012. This is very much like the topics chart a social listening platform would produce. You can see that 30% of the negative conversations mention “chicken,” eight percent mention “chip,” and so on. (Note: Because this was produced from an automated topic analysis, the keywords it counted and collected are raw, exactly what was present in the online conversation at that point in time.)

Topic Analysis Example - KFC

But looking at this, you only know that these keywords were present or popular in those conversations. You don’t know the answer to the critical, insight-producing question: “Why?”

When you perform conversation research, even if you do it using automated tools, you dig a layer or two down to uncover those answers. So here’s a look at Olive Garden’s negative themes from that same research in 2012. We broke out the negative theme of “Taste” to show that the qualifiers … leading to the answer of “Why?” … include Nasty, Disgusting and Like Shit. There’s also Bland, Gross, Like Microwaved Food and Weird.

Topic Analysis - Olive Garden

So we can now apply more specific context around why people didn’t like the taste of Olive Garden. Drilling down yet another level to analyze the mentions of “Nasty” or “Disgusting,” to see if there are specific menu items, specific reasons or perhaps even specific locations where those qualifiers emerged, we may uncover insights that inform Olive Garden’s business.

The point here is certainly not to pick on KFC or Olive Garden. These charts were produced in 2012 using automatic theme analysis. Chances are, the results today would be very different. But the automatic theme analysis is the key point to consider. At Conversation Research Institute, we insist on human analysis to break down the “Why” and offer more concrete insights to help your brand.

A few researchers can’t possibly analyze hundreds of thousands of conversations manually, so for larger conversation sets our process has two steps. We first isolate posts we consider to be the voice of the consumer; that definition changes slightly depending on the client and project at hand. Once we have filtered out posts that do not fit that definition, we sample, if necessary, at rates much higher than traditional research sampling standards.
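For comparison, here is a hedged sketch of the traditional sample-size math (95% confidence, a 5% margin of error) using the standard formula with a finite-population correction. The point above is that we sample at rates well beyond what a calculation like this would call for; the pool sizes below are purely illustrative.

    import math

    def sample_size(population, z=1.96, margin=0.05, p=0.5):
        # Standard sample-size formula with a finite-population correction.
        n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    for pool in (1_000, 10_000, 100_000):
        print(f"{pool:,} voice-of-consumer posts -> traditional sample of {sample_size(pool)}")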

The bottom line is this: If you are relying on machines to spit out insights, you are being tricked into thinking charts and graphs are insights. There’s a big difference in counting (to get the what) and analyzing (to get the why).

Let us help uncover more about the voice of your consumer. Drop us a line today for an introductory discussion.

January 10, 2017

Identifying the Buyer Journey through Conversation Research

The first Conversation Report is due out any day now. Our dive into understanding the buyer journey for the senior care space, which includes nursing homes, assisted living, independent living and more, is coming in at around 15,000 words, over 75 charts and graphs and dozens of insights we’ve synthesized from the data that help senior care brands understand their customers better.

In conducting the research, we had a peculiar challenge. Our goal was not only to find the posts produced by true customers, but to find those actively considering senior care options for themselves or a loved one. How do you isolate not just the consumer, but one who is actively looking, without first knowing who they are or where they are — both answers you get from the research?

It seems a Holmesian Catch-22.

But it’s not.

All of our broad-level research begins by trying to understand the consumer’s conversation habits first. We seek out individuals who have been, or are going, through the buying process and interview them. But we don’t necessarily ask all the questions we hope to answer with the research. Instead, we focus on how they go about discovering information about the product or service at hand. We uncover how they talk and think about the topic in terms of lexicon and verbiage. We try to get at what they might say in an online conversation should they turn to social media and online communities to ask questions about the topic at hand.

By canvassing a small focus group on how people talk about buying or shopping for the product in question, we can then produce more accurate search variables to uncover similar conversations on the web. Consider it our social media version of Google’s Keyword Tool. Search terms also contribute to our pool of knowledge and understanding about the audience, but people who search for “senior care” don’t use that term in a sentence when chatting online about their search for a solution.
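As a hypothetical illustration of how that lexicon work translates into search variables, the snippet below builds a query string from phrases a focus group might actually use rather than the industry’s own label. Every phrase here is invented; the real variables come out of the interviews.

    # Phrases pulled from how people actually talk about the buying process
    consumer_phrases = [
        '"nursing home for my mom"',
        '"assisted living near me"',
        '"can no longer care for dad at home"',
        '"looking at memory care options"',
    ]

    # Industry jargon like "senior care" rarely shows up in real conversations,
    # so it carries far less weight in the final search variables.
    query = "(" + " OR ".join(consumer_phrases) + ")"
    print(query)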

As you approach conversation research, you should consider that your assumptions about your audience and how they discuss certain topics in online conversations are biases. You need to vet them properly to get to a more accurate read on what is being said. One misstep in the search variable construction and you could eliminate thousands of relevant conversations. Or, perhaps worse, you could create thousands more to weed through that aren’t relevant at all.

This reinforces something you’ll hear us say over and over at CRI: Social listening software isn’t enough. You have to add the human element to your data gathering mechanism to make sense of all this noise.

How do you go about constructing your searches? We’d love to hear your thoughts and processes in the comments.

November 21, 2016

An Example of Why Social Listening Needs Conversation Analysis

A key value proposition for the Conversation Research Institute is that we offer conversation analysis of social listening data, the kind of analysis that finds the insights that can help drive your business decisions. But that’s not just a fancy set of words; there’s real reasoning behind it.

First, know that when we say we offer analysis, we mean social listening tools by themselves aren’t enough. Pretty charts and graphs and word clouds don’t do your business any good if you can’t explain what they mean, how the data was discovered and what insights surfaced that can help you.

Conversation analysis

No social listening software does that for you. You have to have conversation analysis – from a human being – to understand the data and surface that information manually.

Case in point, while working on a research project for an upcoming Conversation Report, we found this entry in a sea of data about the elderly care space:

“The social worker at the nursing home ~ when mom first went there ~ had to go to bat for mom and went to court to get a guardian (not my brother) for mom.”

The software in question gave this entry a neutral sentiment and picked out no sub-topics or themes. The software surfaced “social worker,” “nursing home” and “guardian” as word cloud entries, but, again, did not attach any sentiment or context to them.

Because we are manually scoring and analyzing this data, and our perspective is to look at the voice of the consumer as it relates to the elderly care providers (nursing homes, assisted living facilities, independent living communities and other long-term care providers), we add lots more context to the analysis:

  • The sentiment is negative toward the nursing home because the patient needed an advocate
  • The sentiment is positive toward the social worker who served as advocate
  • The source is a family member
  • The theme is patient advocacy
  • A sub-theme is non-family guardianship

And that’s before we went to the original post (which has other excerpts appearing in the research) to clarify even more:

  • The brother in question filed for guardianship after lying for years about having the mother’s power of attorney
  • The social worker was advocating for the patient, but also the rest of the family
  • The author (a daughter of the patient) was considering hiring a lawyer to fight the brother’s claim for guardianship.

So family in-fighting over the burden of care, cost and decision-making was another important theme.
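If it helps to see it in a structured form, here is a hypothetical sketch of how that single post might be captured once the manual coding is done. The field names are invented; the values simply restate the analysis above.

    coded_entry = {
        "sentiment_nursing_home": "negative",    # the patient needed an advocate
        "sentiment_social_worker": "positive",   # the social worker served as that advocate
        "source": "family member (daughter of the patient)",
        "theme": "patient advocacy",
        "sub_themes": [
            "non-family guardianship",
            "family in-fighting over care, cost and decision-making",
        ],
    }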

When you let a computer spit out analysis of tens of thousands, or even millions, of conversations, you get roughly one tenth of the context and actual insight possible from truly understanding what is being said. Certainly, at that scale there’s no way to be as thorough.

But relying on automatic charts and graphs is keeping you away from the one thing you’re looking for: True consumer insight.

That’s what we surface. If you’re interested in finding it for your brand, let us know.

 

November 1, 2016

Can Conversation Research tell you why sales are down?

 

A large national retailer in the food and beverage industry was riding high last year. Sales were up, the brand was healthy, consumers were immersed in the experience. Years of hard work had put the brand on the top of the heap in their category.

But then they noticed that sales of certain beverages had started flat lining. They couldn’t quite figure out why. Nothing in their formulas had changed. Customers weren’t indicating why they were switching drinks or passing on the drinks when they ordered. What was the brand to do?

They turned to online conversations and posed the question, “Are sales for these drinks flat lining because of a consumer shift or something else?” Consumers would likely tip their hand if it was the former. If the research was inconclusive, it wasn’t likely because of a consumer need, but something else.

The conversation research for the brand turned up an insight that explained it. The brand’s customers were becoming increasingly concerned about the sugar content of the drinks in question. They were interested in healthier options.

So the brand formulated a new line of fruit-based, all-natural drinks just in time for spring.

The sugary drink sales stayed flat while the new line took off, exceeding expectations and satisfying customers.

So yes. Conversation research can tell you why sales are down. It may also tell you how to make them go the other direction.

Call us to see how conversation research can help your brand.

October 10, 2016

Diet soda buzz is flat, but so are listening standards

As I write this, I’m on day nine without drinking diet soda. This coming from someone who has probably averaged 6-12 cans of soft drink per day since childhood. And no, I’m not exaggerating.

The caffeine withdrawal headaches are gone, but I still don’t like drinking water all the time, though I do feel a bit lighter and healthier, which was the point.

While I jokingly said when I started this process that the sales and marketing teams at Diet Pepsi were in for a rough fall wondering why their Louisville, Ky., volume just disappeared, it seems I may be the least of their concerns.

Engagement Labs released a report last week on soft drink brands that shows a surprising decline in online and offline conversations about diet sodas. Their report claims consumers’ passion for diet soda has “gone flat” but that people are still talking about their love for sugared soft drinks more than ever.

Engagement Labs combines online conversation research, like the work I’m a part of at the Conversation Research Institute, with person-to-person discussions in focus group form. It blends those two scores into what it calls a “TotalSocial” tool and presents a baseline score to compare brands.

While all the details of how the score is formulated are certainly proprietary, if you assume all are scored on the same measurement system, the results are intriguing.

 

Coca-Cola is the standard bearer of the soda world, as you would expect, scoring a 50 on the TotalSocial scale. The industry average is around 40. Diet Mountain Dew and Diet Coke are the only two low-calorie options that hit that 40 mark, the rest are below. Diet Pepsi (30), Coke Zero (31) and Diet Dr. Pepper (36) are at or near the bottom of the list.

The main concerns or topics Engagement Labs points to as reasons? Health concerns about sugar and artificial sweeteners, the push for natural ingredients and backlash to recent formula changes by some brands. Engagement Labs offers the opinion that soda brands need to find ways to drive positive consumer engagement for their diet soft drinks the way many do for their sugary brethren.

Of course, Engagement Labs is a marketing company with what looks like a subscription-based measurement tool trying to hook a brand or two as a client, too.

When I see data like this, I’m certainly interested. Looking at how one company, agency or tool ranks and qualifies social media data is always interesting. My skeptic brain kicks in and tries to punch holes in the methodology.

While I don’t know a lot about Engagement Labs’s approach (maybe they’ll chime in and enlighten us in the comments), my skepticism tells me they’re likely using some mass social media scan from a listening platform without appropriate disambiguation. But that’s balanced by the fact that they claim to also offer focus group-esque person-to-person interviews. And those require some work and often yield much more valid responses, as the questions can be directed.


What these reports and surveys typically lead me to, however, is that we don’t really have an industry standard for analyzing and understanding online conversations. Each tool brings in its own volume of online conversations and the volumes never match. NetBase might show 380,000 mentions of a brand while Crimson Hexagon shows 450,000, Brandwatch 145,000 and Radian6 something completely different.

This is why CRI takes a tool-agnostic approach. We’d rather assume that sampling enough from each and pulling together an aggregate survey of the online conversation space gives us meaningful data. At least more meaningful than what any one tool offers.
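A rough sketch of what that aggregate approach can look like in practice: pull a review sample from each platform in proportion to the volume it returns, rather than trusting any single tool’s count. The volumes reuse the hypothetical figures above, and the target sample size is arbitrary.

    volumes = {"NetBase": 380_000, "Crimson Hexagon": 450_000, "Brandwatch": 145_000}
    target_sample = 3_000
    total = sum(volumes.values())

    for tool, volume in volumes.items():
        n = round(target_sample * volume / total)
        print(f"{tool}: review {n:,} of {volume:,} collected posts")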

And certainly one that I can defend to clients who won’t then drive me to drink (Diet Pepsi) again.

For more on how the Conversation Research Institute can help you uncover insights about your customers or brand, give us a call or drop us a line.

October 4, 2016

The Achilles’ Heel of Social Listening Software

If you use social listening software there’s a good chance you share a frustration with thousands just like you: You can never get the right data. Disambiguating online conversation searches is part Boolean Logic mastery, part linguistics and part voodoo. Or so it seems.

Disambiguation refers to weeding out all the data that comes back in your search query that isn’t relevant. It is a fundamental skill in the practice of conversation research. Type in the brand name “Square,” for instance, and you’re going to have a hard time finding anything that talks about Square, the credit card processing app and hardware. Instead, you’ll find a sea of mentions of the word “square,” including stories about Times Square, the square roots of things and 1950s parents whose children referred to them as squares.
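To give a flavor of what disambiguation looks like in practice, here is a generic Boolean string for the Square example, written in no particular platform’s syntax. The include and exclude terms are illustrative, not an exhaustive query.

    square_query = (
        'Square AND ("card reader" OR "point of sale" OR payments OR "credit card") '
        'NOT ("Times Square" OR "square root" OR "town square" OR "square feet")'
    )
    print(square_query)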

Disambiguation is a big problem for social listening platforms, yet most of them completely ignore the end user’s need for help. Some have built Boolean logic helpers into their software; Sysomos and Netbase have nice ones. But the only marketing professionals (the audience this type of software targets) who understand Boolean logic are the ones who switched majors in college.

What happens when someone who isn’t fluent in Boolean logic searches for conversation topics? You get a lot of results you aren’t interested in. And sadly, most end users of these software platforms don’t know any better. They see results, know they can output a couple of charts or graphs for the monthly report, and they’re done.

But the results they’re looking at are still littered with irrelevant posts. You can tweak your Boolean string all you want, but you’re likely to come up with something that looks right but isn’t. And we haven’t even gotten to the Achilles’ heel yet!

Case in point: I did a brand search for a major consumer company last week. This was a simple brand benchmarking project where I was trying to identify all the conversations online that mentioned the brand, then decipher what major topics or themes emerged in those conversations.

My first return from the software was 21,000 conversations. As I reviewed them, I realized there was a lot of spam. After three hours of Boolean revisions, I narrowed the automatic results list to 1,654 conversations. But guess what? While they were all valid mentions of the brand, many of them were job board postings, stock analysis and retweets of news items mentioning the brand. None of these categories — which will likely show up in the automated searches for any sizable brand — are relevant to what the client asked of me: What are the topics of conversation when people talk about us?

So I manually scored the 1,654 conversations, creating categories and sub-categories myself. I also manually scored sentiment for any that made it to the “relevant” list. Here’s what I found:

  • 339 relevant conversations (Achilles’ heel coming)
  • 50% were negative; 32% positive and 18% were neutral (compared to the automated read of 92% neutral, 5% positive and 3% negative)

And here’s the Achilles’ heel: (Some topics redacted for client anonymity)

 

Despite manual scoring and categorizing, the majority of results I found were in a category I called “News Reaction.” These were almost all re-tweets of people reacting to a news article, the originals of which had been removed in my automatic disambiguation process. The client doesn’t care about the news article (for this exercise) but about what consumers are saying.

The Achilles’ heel of social listening platforms is that they generally do not disambiguate your data well automatically, and even when you manually score it, there are reactions and by-products of original posts included that you don’t care about. (There are probably also posts excluded that you do care about, but my guess is those are of less concern if your search terms are set well.)

This is the primary reason conversation research cannot be left to machines alone. For the platforms by themselves will make you believe something that isn’t actually true.

For more on how conversation research can help your brand or agency, give us a call or drop us a line.

 

 


September 27, 2016

Why small samples matter in Conversation Research

 

Conversation research is distinct from traditional market research in that it is largely unstructured. We use a variety of software and tools to process the data sets to produce some degree of organization – topics, sources, themes, etc. – but you’re not pulling a sample of 1,000 people of a certain demographic and asking them the same questions. You’re casting a wide net, looking for similarities in random conversations from around the world.

So when your review comes back with 100 conversations out of 23,000, it’s easy to dismiss that percentage (less than half of one percent) as not valid. But let’s look at an example and see if validity needs to be reconsidered.

CRI recently conducted a high-level scan of the online conversations around work-life balance with our friends at Workfront. The project management software company focuses a lot of its content on work-life balance as its solution helps bring that result to marketing agencies and brand teams around the world.

Over the 30-day period ending September 19, we found 23,021 total conversations on blogs, social networks, news sites, forums and more – essentially any publicly available online source where people could post or comment – about work-life balance.

If you focus on the 23,021 as your total pool of conversations, it might frustrate you that only eight percent (1,827) could be automatically scored for sentiment. (One can manually score much more, and CRI typically does a fair amount of work to close that gap, but it is an exercise in time and resources that for this project both parties elected to set aside.)

But if you take that eight percent – those 1,827 conversations – and now consider them your sample set, you’ve got something. There, we discover that 79 percent of the scored conversations were positive – people are generally in favor of or have good reactions to the concept of work-life balance. But that means 21 percent of them don’t.

And this is where our curiosity is piqued.

 

It turns out the predominant driver around the negative conversations on work-life balance is that the concept itself is a myth. Out of the 382 total negatively scored conversations found, 98 of them indicated in some way that work-life balance was a lie, a farce, an illusion and so on.

Another 10 were tied to a conversation around a piece of content exposing the “lies” of work-life balance, also indicating there’s some level of mistrust that it is attainable. And 10 more revolved around a reference to work-life balance being overrated.

So while the negatively scored conversations were less than two percent of the total conversation set, they were 21% of the subset that could be scored for sentiment. And of that subset, more than one quarter focused on the concept not being real at all.
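The arithmetic, worked out in a few lines of Python using only the figures already cited above:

    total_posts = 23_021
    scored = 1_827                 # auto-scored for sentiment (about 8%)
    negative = 382                 # 21% of the scored subset
    myth_posts = 98                # negatives calling work-life balance a myth

    print(f"scored share of all posts:   {scored / total_posts:.1%}")   # ~7.9%
    print(f"negative share of scored:    {negative / scored:.1%}")      # ~20.9%
    print(f"negative share of all posts: {negative / total_posts:.1%}") # ~1.7%
    print(f"'myth' share of negatives:   {myth_posts / negative:.1%}")  # ~25.7%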

This is where deeper analysis can help us synthesize true insight. Why do people think it’s a myth? Is it that the naysayers are likely cynics who cannot draw hard lines between their work time and focus and that which they spend away from work? Or do the demands of most jobs actually make it impossible to separate work from life? Or is it something else?

The bottom line is that one shouldn’t be dismissive of small data sets drawn from big data, especially when it comes to conversation research. We may only be talking about 100 conversations out of 23,000, but those 100 conversations are from people who are proactively discussing the topic at hand, not people being led down a Q&A path by an interviewer or a survey.

This brings delightful structure to that unstructured world.

September 23, 2016

What is conversation research?

Any research effort begins with the quest to define the problem. I suppose then any research business should start the same way. What exactly is Conversation Research and what problem does it attempt to solve?

Conversation Research is, simply, researching online conversations – those found in social media, or any other online mechanism that enables user-to-user discussion – with the purpose of discovering insight. We must keep the definition broad to allow inclusion of many varieties of sources, discussions, insights and purposes.

The Conversation Research Institute, for the record, focuses primarily on insights that drive business and marketing decisions. But our scope won’t always be limited to that, either.

But aren’t we just saying “social monitoring” or “social listening” using synonyms? Not exactly. For me, social monitoring has always been a very reactive practice – one that is most commonly associated with customer service and reputation management. Wait until we see what people say before we do anything with it.

Social listening, on the other hand, has been more of the proactive practice. Let’s go look for mentions of something specific in order to learn or direct our future activities.

Software companies and consultants use the two terms interchangeably, though they are very different in intent. And both have been further lumped under the larger umbrella of “social analytics.” But that can include things like follower counts, conversion rates and the like, which a researcher mining for insights may or may not have interest in.

So Conversation Research is a different practice. It is analyzing the existing data around conversations among an audience segment. That segment could be a demographic, psychographic or set that contains some commonality, like all having mentioned a particular phrase or word.

The intention of Conversation Research is to deliver insight about the audience having the conversation. What do they say? How do they feel? What is their intention?

Knowing this information unlocks a third characteristic of a research audience. Instead of demographic or psychographic, it represents the social-graphic characteristics of an audience: What do they talk about in online conversations? What content do they read and share? What audiences do they influence? What influencers influence them?

All of these qualities of a given audience or audience member can unlock previously unknown data about the customer. It can open doorways to new paths to engagement and conversion. It is market research done with online conversations as the focus group – the largest focus group ever assembled, mind you. And it has the potential to revolutionize the way we get to know our customers and prospects.

Conversation Research is not intended to replace traditional market research, nor should it. But there are some interesting reasons to consider leveraging this approach as a supplement to, and in some cases instead of, traditional focus groups or surveys:

  • Conversations online are seen by far more people than hear them offline.
  • Conversations online are not led or framed by a questioner. You are mining real, voluntary, organic assertions from consumers.
  • Conversations online are not a snapshot in time but can be analyzed in real-time or as a trend over time.
  • While traditional research can offer more efficient sampling in terms of demographics, representativeness against national statistics and the like, conversation research can return hundreds of thousands of participants rather than samples of a few hundred people.

My colleagues and I have been mining online conversations for several years now. I was proud to publish what we believe to be the first-ever industry report based solely on online conversations in 2012. But now we are defining Conversation Research with a renewed focus and vigor.

Mining online conversations for insights from consumers is the next big trend in brands using social media. Conversation Research is here. The only question is how quickly will you reap the benefits?

For more on how The Conversation Research Institute can help, drop us a line or visit us at http://www.conversationresearchinstitute.com.

 
