May 2, 2017

Conversation Research Requires a Coding Plan

We kicked off a client project today at CRI and spent a fair amount of time deciding on our coding plan: the approach we’ll use to code the conversations we find with social listening platforms.

What do you visualize when you hear that we “code the conversations”? For some, it means reviewing each individual post and slapping a topic on it. For others, it might mean double-checking data like sentiment or gender. For us, however, coding a conversation requires understanding the client’s products, services, competitors and marketplace; reviewing a preliminary sample of conversations to understand what types of posts we’ll be seeing; and anticipating what people might say about the brand or topic at hand.

Our Coding Plan gives us the list of unanswered questions about the data. Social listening platforms can detect things like sentiment, gender and location, and can even parse out a topic or theme, but not with a level of certainty we are comfortable with, nor at a level we would recommend you base marketing decisions on. So yes, we review the collected data for accuracy.

But our Coding Plan also determines what else we want to know about the data that the social listening platforms do not provide. Take a look at this snippet from a spreadsheet we recently coded:

You see entries for Sentiment, Account Type and Gender, all of which can be automatically detected. Our first step is to verify that the automatic detection worked. Frankly, sentiment accuracy in social listening tools is incredibly disappointing: we estimate only 10-20% of posts are scored at all, when 30-50% of them could be. Gender is detected only 20-30% of the time but can be determined manually for about 60-80% of posts, so there’s some heavy lifting to be done.
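As a rough sketch of what that verification pass can look like, here is a hypothetical Python check against a generic platform export. The file name and column names are illustrative, not any vendor’s actual schema:

```python
import pandas as pd

# Hypothetical export from a social listening platform; "sentiment" and
# "gender" are illustrative column names, not a specific vendor's schema.
posts = pd.read_csv("listening_export.csv")

# Share of posts the platform actually scored (blank cells = not scored),
# i.e., how much manual coding is left to do.
for field in ["sentiment", "gender"]:
    scored = posts[field].notna().mean()
    print(f"{field}: auto-scored on {scored:.0%} of {len(posts)} posts")
```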

The rest of the scoring columns you see are categories of information we determined would be insight-fertile for the client in question. We read each post to understand the context of the mention. Was it a promotion from a reseller or a recommendation from a customer? Did they express any emotion in describing the product or service (not merely whether a word that describes an emotion appeared in the text, which is what social listening platforms report)? Which feature of the product did they mention specifically? Was there a specific issue or topic about that feature that stood out? What was the use case of the product or service (in this case, what type of location)? And did other use cases emerge beyond what the product or service was primarily sold for?
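To make the shape of a Coding Plan concrete, here is a minimal sketch of a codebook in Python. The columns and allowed values are illustrative stand-ins, not the actual categories from the client project described above:

```python
# A Coding Plan as a simple codebook: each column an analyst scores,
# with the values allowed in it. All names here are illustrative.
CODEBOOK = {
    # Verified against the platform's automatic detection
    "sentiment":    ["positive", "neutral", "negative"],
    "account_type": ["consumer", "reseller", "media", "brand"],
    "gender":       ["female", "male", "unknown"],
    # Added by human analysts after reading each post
    "context":      ["promotion", "recommendation", "complaint", "question"],
    "emotion":      ["joy", "frustration", "none", "other"],
    "feature":      [],  # free text: which product feature was mentioned
    "use_case":     [],  # free text: where or how the product was used
}

def invalid_codes(row: dict) -> list[str]:
    """Return the columns in a coded row whose value falls outside the codebook."""
    return [
        col for col, allowed in CODEBOOK.items()
        if allowed and row.get(col) not in allowed
    ]

# Example: flag a row where an analyst typed a value the plan doesn't allow
row = {"sentiment": "mixed", "account_type": "consumer", "gender": "female",
       "context": "complaint", "emotion": "none", "feature": "", "use_case": ""}
print(invalid_codes(row))  # ['sentiment'] -- "mixed" isn't in the plan
```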

Planning your coding means anticipating where you’ll find the most useful answers in your research. It’s the social media analysis equivalent of crafting the right questions in traditional market research. We like to think we’re pretty good at that part. Hopefully, this helps you get good at it, too.

If you’d like to see what conversation research can reveal about your business, customers, competitors and marketplace, drop us a line. We’d be happy to discuss it with you.

March 21, 2017

Does How Customers Identify Your Products Matter?

Dirt Devil has seven categories of products. Each of those categories has up to 28 different products within it. So it’s not enough to say, “You should buy a Dirt Devil.” That will only confuse the customer when they go to the website or march up and down the vacuum aisle looking for the recommendation. If there are dozens to choose from, how is the customer to know what’s best for her (or him)?

As Malcolm Gladwell so eloquently explained in a now-famous TED talk, at one point in time customers wanted choice. Diversifying its product offerings made Ragu a mint. But there are more recent examples of the paralysis of choice, which indicate that when there are too many options, customers will not buy anything at all.

We looked at some conversation research around the Dirt Devil brand and discovered that while we cannot pinpoint a correlation between too many choices and hard sales data in online conversations, we can observe how Dirt Devil customers refer to the products they use.

Dirt Devil Conversation Research - Products Used

While the categories led the way — indicating consumers are more apt to describe their Dirt Devil in broad form — there were several attempts at identifying specific product names and models. AccuCharge and SimpliStick were among the top six product identifiers in our research. That could speak to strong branding for those products. But why don’t others emerge? Does the brand divide its marketing among different agencies or marketing initiatives? Is that why some stand out and others don’t?

Online conversations may not tell us the answers to those very specific questions, but a hearty conversation internally might.

And is it more beneficial to have everyone referring to Dirt Devil stick vacuums as such rather than in hodgepodge ways? One hundred people shouting praise for a Dirt Devil stick vacuum is probably more beneficial than 24 complimenting the SimpliStick while 15 talk about the Power Stick and nine refer to the Power Air Stick, right?

This exercise is not to imply that Dirt Devil has a branding problem or unnecessary confusion among consumers about what products they offer. It is simply a way to open a dialog about the Paradox of Choice and whether or not branding initiatives could or should solve for it.

Word-of-Mouth Marketing is up to 200 times more effective than advertising, according to the Word-of-Mouth Marketing Association. Shouldn’t your brand’s conversation focus then be on unifying how people talk about you so you can deliver a more consistent wave of conversation when they do?

It’s certainly good food for thought and something you’ll never get a grip on unless you’re studying the online conversation about your brand. If you need help doing that, drop us a line. We’d love to help.

March 14, 2017

Why Tag Clouds and Topic Wheels Hold You Back

One of the most common forms of data visualization among social listening platforms is the tag cloud. This graphic representation of the most common topics, organized by word, size and color, is easy for the layman to decipher, so it is dangled at the end of the software company’s string like top sirloin.

But it’s just a chicken nugget. Or, more aptly, just the breading around the chicken nugget.

Tag Cloud

Topic wheels are a somewhat more informative method of data visualization. They enable you to see subtopics easily. But the visualization is still just a superficial layer around the insights your data contains. It helps you see one layer down.

Topic Wheel

But insights are seldom found one layer down. Understanding the conversation — the why behind the emerging topics and themes — means drilling deeper.

Take our study of Dirt Devil, for instance. Looking at tag clouds and topic wheels, we may notice that durability is an issue that surfaces in the negative conversations around the brand. But why does Dirt Devil have durability issues? To know that, you have to drill down into the negative conversation, then into the durability topic, then analyze and understand the various issues there.

Conversation Research breakdown

The level of detail that can provide a product manager with actual insights to improve the product is not found using a tag cloud or a topic wheel. It’s found by diving in and analyzing and understanding the full context of the conversation. With this information — certainly represented visually for ease of understanding — we can tell the product manager that there are structural issues in quality of construction, weakness in the unit handles and motor issues, particularly when used for pet hair. These insights give the product team direction so they can either A) Ask deeper questions in further research or B) Focus on the opportunities to improve the product.

The overarching point is that if you’re relying on visualizations of your data rather than analysis of it, you’re missing a lot. In fact, we would surmise you’re missing everything.

We would love to help you understand your data. Want to know more about what customers say about your brand? Your products? What you can do better? Drop us a line. We can help.

March 7, 2017

Why CMOs Aren’t Using The Data They Pay For

Chief Marketing Officers (CMOs) are spending more on analytics now than ever before, yet they admit that barely a third of the data they pay for gets used. That’s according to The CMO Survey from the American Marketing Association, Deloitte and Duke University’s School of Business. One of the biggest reasons CMOs aren’t using the data? They say it’s too complex and lacks insight and relevance.

This is exactly why we started the Conversation Research Institute. No, we’re not going to solve that problem for all aspects of marketing analytics. But when a CMO gets a report from a social listening platform, it’s a vague assortment of charts and graphs. It doesn’t explain WHY any of those bars are as big or small as they are, why the pie chart looks the way it does, or why the colors are one way or another.

What Factors Prevent Your Company From Using More Marketing Analytics?

When you pay for software as a service, all you typically get is the software. The “service” part doesn’t mean someone will serve you insights or make the software work for you.

CRI is focused on taking your existing social listening software, or implementing the software we use on your behalf, and then interpreting the data so that it:

  1. Is easy to understand
  2. Delivers insights you can use
  3. Focuses on the voice of your consumer to deliver relevance

CMOs simply don’t have time to interpret the data they receive. A strong analyst will see that and deliver what the CMO needs when he or she needs it, focusing on the stakeholders in question and on the issue of relevance. Without those two focal points, no amount of data, charts or graphs will help the CMO make decisions.

It is true: CMOs are spending more money on analytics. According to the study, analytics will jump from around five percent of marketing budgets to almost 22 percent over the next three years. Why on earth would they pay more money for something they use less than a third of?

We owe it to ourselves as analysts and evangelists for conversation research, social listening and social analytics to close that gap and ensure that CMOs are getting their money’s worth. We know what we’re doing about it at CRI. What are you doing about it?

February 21, 2017

Join Us at #IBMAmplify March 20-22

For those interested in the world of Artificial Intelligence and Conversation Research (which is driven by A.I. algorithms), IBM Amplify 2017 is a must-attend event. And I’m pleased to report that I have been invited to speak as part of the event’s innovation leaders series.

The event will be March 20-22 at the MGM Grand in Las Vegas. You can register online at http://ibm.com/amplify

IBM Amplify 2017

My talk, which is slated for March 22 at 10:15 a.m. local time, will focus on the need for human analysis to close the gap in Artificial Intelligence’s usefulness when it is used to make sense of unstructured data. As with most talks I give, it will raise a few eyebrows, but hopefully it will also push the industry forward in building A.I. that works better.

And for those of you interested, I can score you a VIP invite to an influencer dinner with myself, Jay Baer and others. Just drop me a line before you register and I’ll tell you how to score that invite!

CRI is excited to be represented at what is essentially the thought leadership home for A.I. and its conversation research offshoot. IBM Amplify is, in effect, the user conference for IBM’s fabled Watson A.I. engine. To be included is a nice honor for both CRI and me.

See you in Vegas!

February 16, 2017

How Analyzing Online Conversations Builds a Better Brand

The fun for me in analyzing online conversations is in the proof points the data provides. No longer do product, experience or marketing communications decisions have to be left to assumptions. The data allows you to turn them into assertions.

In our recent report on senior living, we analyzed online conversations of people discussing the major types of senior care facilities. We found hundreds of conversations mentioning nursing homes, assisted living facilities, independent living facilities and long-term care options. We broke each of those conversations down by facility, sentiment and topic.

When you do this, you get a glimpse into what consumers truly think. We aren’t prompting them for answers, which in and of itself would bias the information; we’re simply recording when they talk about the topic in question voluntarily and freely.

What does this type of analysis tell us? Take for instance this visualization:

Assisted Living Family Experience Negative Conversations

This is a breakdown of the conversation topics within the posts we categorized as focusing on assisted living facilities, where the main topic was the experience of the patient’s family (important, since the primary buyers are the adult children of the patient) and those experiences were scored as having negative sentiment. So 30% of all negative conversations about assisted living facilities (represented in the circle on the left) were determined to be about the family experience. The right-hand circle breaks those down by specific topic.

What this tells us is that 32% of the negative family experience conversations were about shopping for the facility overall. What is it that is so bad about it? We’d need to move a layer deeper in the analysis to discover that, but since we have the data, we can! Another 32% mention that they prefer an alternative to an assisted living facility. Further analysis shows they don’t prefer independent living or nursing homes, but rather staying home and not needing a care facility at all.
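For readers who want to reproduce this kind of two-level breakdown on their own coded data, here is a minimal pandas sketch. The file and column names are hypothetical placeholders for however your coded sheet is structured:

```python
import pandas as pd

# Hypothetical coded sheet; the column names are placeholders for
# however your own coding plan structures the data.
coded = pd.read_csv("coded_conversations.csv")

# Level 1: share of negative posts whose main topic is family experience
neg = coded[coded["sentiment"] == "negative"]
family = neg[neg["main_topic"] == "family_experience"]
print(f"family experience: {len(family) / len(neg):.0%} of negative posts")

# Level 2: break the family-experience posts down by subtopic
print(family["subtopic"].value_counts(normalize=True).round(2))
```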

While this may seem a logical conclusion if you understand the consumer, to our knowledge it had not been statistically demonstrated before. Now it has. That insight can also give assisted living marketers more pointed direction for developing better copy, sales materials or even sales strategies, enhancing conversions and driving more customers.

Emotions while enrolling and family in-fighting are significant portions of the negative family experience, too. What can that tell an assisted living marketer hoping to land more clients? Those conversations can be further vetted to see if common threads run throughout.

The more you peel back the layers on analyzing online conversations, the more interesting nuggets you discover to fuel decisions for marketing, user experience or even product development. And those can build a better, more profitable brand.

The only question left to answer is why haven’t you started?

For more analysis of online conversations around the senior living industry, including a mapping of the buyer journey for senior care, see our Conversation Report. For more about how CRI can help you in analyzing online conversations around your brand or market, drop us a line.

January 24, 2017

What Social Listening Tools Don’t Tell You (That Conversation Research Does)

If there is one core reason the Conversation Research Institute exists, it is that social listening tools only collect and represent data. They don’t analyze it. Try as you might, you will never get an algorithm to produce 100% reliable insights from counting things. It’s the human processing of the counted things that makes the data useful.

Case in point: the topic analysis in social listening tools. What this feature does in most software designed to “listen” to online conversations is count keywords. Topic analyses are often presented as word clouds. We prefer to look at them as pie charts, so there’s a more natural understanding of a topic’s volume in relation to the whole.
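To make concrete what “counting keywords” amounts to, here is a minimal sketch of the mechanism. The posts and keyword list are invented for illustration; real platforms layer on stemming, spam filtering and the like:

```python
from collections import Counter

# Invented example posts; a real platform counts across thousands.
posts = [
    "the chicken was cold and dry",
    "their chicken sandwich is fine but the chips were stale",
    "worst chicken i have ever had",
]
keywords = ["chicken", "chip"]

counts = Counter()
for post in posts:
    for kw in keywords:
        if kw in post:
            counts[kw] += 1

# Express each keyword as a share of the whole, pie-chart style
total = sum(counts.values())
for kw, n in counts.most_common():
    print(f"{kw}: {n / total:.0%} of keyword mentions")
```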

Here’s an analysis of specific “Dislikes” around Kentucky Fried Chicken that I conducted in 2012. It is very much like the topics chart a social listening platform would produce. You can see that 30% of the negative conversations mention “chicken,” 8% mention “chip,” and so on. (Note: because this was produced by an automated topic analysis, the keywords it counted and collected are raw, exactly as they appeared online in the conversation at that point in time.)

Topic Analysis Example - KFC

But looking at this, you only know that these keywords were present or popular in those conversations. You don’t know the answer to the critical, insight-producing question: “Why?”

When you perform conversation research, even if you do it using automated tools, you dig a layer or two down to uncover those answers. So here’s a look at Olive Garden’s negative themes from that same 2012 research. We broke out the negative theme of “Taste” to show that the qualifiers leading to the answer of “Why?” include Nasty, Disgusting and Like Shit. There’s also Bland, Gross, Like Microwaved Food and Weird.

Topic Analysis - Olive Garden

So we can now apply more specific context around why people didn’t like the taste of Olive Garden. Drilling down yet another level, analyzing the mentions of “Nasty” or “Disgusting” to see if there are specific menu items, specific reasons or perhaps even specific locations where those qualifiers emerged, may uncover insights that inform Olive Garden’s business.
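As a sketch of what that drill-down can look like in code (hypothetical column names again, not our actual workflow):

```python
import pandas as pd

# Hypothetical coded sheet; column names are illustrative.
coded = pd.read_csv("coded_conversations.csv")

# Layer 1: negative posts coded under the "taste" theme
taste = coded[(coded["sentiment"] == "negative") & (coded["theme"] == "taste")]

# Layer 2: which qualifiers appear within that theme, and how often
print(taste["qualifier"].value_counts(normalize=True).round(2))

# Layer 3: read the raw posts behind one qualifier, looking for menu
# items, reasons or locations that recur
for text in taste.loc[taste["qualifier"] == "nasty", "post_text"].head(20):
    print("-", text)
```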

The point here is certainly not to pick on KFC or Olive Garden. These charts were produced in 2012 using automatic theme analysis. Chances are, the results today would be very different. But the automatic theme analysis is the key point to consider. At Conversation Research Institute, we insist on human analysis to break down the “Why” and offer more concrete insights to help your brand.

A few researchers can’t possibly analyze hundreds of thousands of conversations manually, so our process for larger conversation sets has two steps. We first isolate posts we consider to be the voice of the consumer; that definition changes slightly depending on the client and project at hand. Once we have filtered out posts that do not fit that definition, we sample, where necessary, at rates much higher than traditional research sampling standards.
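A minimal sketch of that two-step process, assuming a generic export; the voice-of-consumer filter and the 20% sampling rate are invented stand-ins, since the real definitions vary by client and project:

```python
import pandas as pd

posts = pd.read_csv("all_conversations.csv")  # hypothetical full data pull

def is_voice_of_consumer(row) -> bool:
    # Stand-in for a project-specific definition; here we simply drop
    # obvious resellers, media accounts and brand-owned handles.
    return row["account_type"] not in {"reseller", "media", "brand"}

# Step 1: isolate the voice of the consumer
voc = posts[posts.apply(is_voice_of_consumer, axis=1)]

# Step 2: sample for manual coding; the 20% rate is an invented example
# of "much higher than traditional sampling standards," not a CRI figure
sample = voc.sample(frac=0.20, random_state=42)
sample.to_csv("coding_sample.csv", index=False)
```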

The bottom line is this: if you are relying on machines to spit out insights, you are being tricked into thinking charts and graphs are insights. There’s a big difference between counting (to get the what) and analyzing (to get the why).

Let us help uncover more about the voice of your consumer. Drop us a line today for an introductory discussion.

January 17, 2017

Sneak Preview: Senior Care Industry Report Shows Conversations Happen In Known Communities

Our first industry report is due out any day now. The Conversation Report: Independent Living to Nursing Homes: Understanding the Buyer Journey for Senior Care looks at online conversations over the course of a calendar year in which people discuss senior care facilities and services with some level of intent to buy. We’ve researched, indexed and analyzed over 19,000 conversations, surfaced almost 1,200 that are true voices of the consumer and have a laundry list of insights to share with those buying the report.

To ensure you get the first chance to download the executive summary and purchase the full report, be sure to join our list.

Our exploration surfaced many insights about senior care shoppers we didn’t expect to find, as well as some we did. While I personally had not explored the conversation set in the senior care industry much before the endeavor, my experience with conversation research as a whole tells me that consumers have conversations in exactly the types of communities that social media marketing often ignores: forums and message boards. And for senior care, that is accurate.

So while we can all agree that Facebook, Twitter, LinkedIn and other social networks are the sexy, consumer-driven platforms that quickly surface as popular for social media, as brands we should understand that consumers often turn away from them and to known and more intimate communities for recommendations, referrals and support during buying decisions. In my experience, the more personal and private the decision, the more this hypothesis proves true.

Forums and message boards make up more than 80% of the online conversation about the senior care space. Consumers there turn to communities they trust built around the topic at hand (AgingCare.com was popular) but they also turn to known communities — ones which they are already a member of for other reasons (WeightWatchers.com ranked high as well).

For brands, this means that to truly engage potential customers, you have to be more aware of social media than most seem to be. Facebook and Twitter alone won’t cut it. Minding your own social profiles doesn’t scratch the surface of where your audience is engaging around the topics most likely to lead to new business for your brand. It also means investing in true community managers could be a smart play: people who go beyond minding the social profiles and assimilate into existing communities as formal or informal representatives of the company.

While charts like this have existed for years, and the knowledge that forums and message boards play a big part in any brand’s online conversations is not new, it is shocking how poorly brands have adapted. We found no instance of a brand representative responding to these forum posts.

Don’t miss more insights in the upcoming Conversation Report: Independent Living to Nursing Homes: Understanding the Buyer Journey for Senior Care. Subscribe to our updates via the form on our home page.


January 10, 2017

Identifying the Buyer Journey through Conversation Research

The first Conversation Report is due out any day now. Our dive into understanding the buyer journey for the senior care space, which includes nursing homes, assisted living, independent living and more, is coming in at around 15,000 words, with over 75 charts and graphs and dozens of insights we’ve synthesized from the data to help senior care brands understand their customers better.

In conducting the research, we had a peculiar challenge. Our goal was to find not just the posts produced by true customers, but those from people actively considering senior care options for themselves or a loved one. How do you isolate not just the consumer, but one who is actively looking, without first knowing who they are or where they are (both answers you get from the research)?

It seems a Holmesian Catch-22.

But it’s not.

All of our broad-level research begins by trying to understand the consumer’s conversation habits first. We seek out individuals who have been through, or are going through, the buying process and interview them. But we don’t necessarily ask all the questions we hope to answer with the research. Instead, we focus on how they go about discovering information about the product or service at hand. We uncover how they talk and think about the topic in terms of lexicon and verbiage. We try to get at what they might say in an online conversation should they resort to social media and online communities to ask questions about the topic at hand.

By canvassing a small focus group on how people talk about buying or shopping for the product in question, we can produce more accurate search variables to uncover similar conversations on the web. Consider it our social media version of Google’s Keyword Tool. While search terms also contribute to our pool of knowledge and understanding about the audience, people may search for “senior care,” but they don’t use that term in a sentence when chatting online about the search for a solution.
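If it helps to picture the step, here is a minimal sketch of turning focus-group lexicon into a Boolean search string. The phrases are invented, and the syntax is generic; every listening platform has its own query language:

```python
# Phrases harvested from focus-group interviews, grouped by concept.
# All phrases here are invented for illustration.
LEXICON = {
    "facility": ['"assisted living"', '"nursing home"', '"memory care"'],
    "intent":   ['"looking for"', '"shopping for"', '"need to find"'],
}

def build_query(lexicon: dict[str, list[str]]) -> str:
    """OR the phrases within each concept group, then AND the groups."""
    groups = ["(" + " OR ".join(phrases) + ")" for phrases in lexicon.values()]
    return " AND ".join(groups)

print(build_query(LEXICON))
```

Grouping this way keeps each concept broad (any phrasing of "facility" counts) while the AND between groups keeps the results on-topic, which is exactly the balance the focus-group lexicon is meant to inform.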

As you approach conversation research, you should consider that your assumptions about your audience and how they discuss certain topics in online conversations are biases. You need to vet them properly to get to a more accurate read on what is being said. One misstep in the search variable construction and you could eliminate thousands of relevant conversations. Or, perhaps worse, you could create thousands more to weed through that aren’t relevant at all.

This reinforces something you’ll hear us say over and over at CRI: Social listening software isn’t enough. You have to add the human element to your data gathering mechanism to make sense of all this noise.

How do you go about constructing your searches? We’d love to hear your thoughts and processes in the comments.

January 5, 2017

How Audience Index Can Produce Insights in Conversation Research

It’s one thing to know what percentage of a given audience is male or female, or of different ages, ethnicities and so on. It’s another to understand how that audience compares to the norm. Indexing a given set of results against a generally understood or accepted point of reference not only frames the context of that audience characteristic, but can help you elevate important insights in conversation research.

Some social listening platforms offer audience indexing in their demographic and psychographic data. This seldom-used and often misunderstood statistic is one we constantly refer to at CRI, since it can lead to a more intimate understanding of the overall makeup of a given audience.

To better understand indexing, take a look at this chart on a given audience’s ethnicity. Its primary function is to show the percentage of the audience broken down by ethnicity.

But we’ve also displayed the index compared to the general demographic profile of a commonly used site (in this case, Twitter). We know from multiple resources (Pew, Northeastern University, etc.) that, in general, Twitter’s audience parallels the U.S. population in terms of ethnicity. Even with some variations considered, at a minimum we are comparing our audience to an audience of active social media users.

Indexing audiences in Conversation Research

As you can see, Caucasians in this audience index at 1.14. That means this audience is 14% more likely to be Caucasian than the base audience of Twitter users, so it skews white. It also comprises slightly more African-Americans, 19% fewer Asians, and slightly fewer American Indians or Native Islanders and “others.”

But look at the Hispanic index. An index of 0.28 means this audience is 72 percent less likely to feature Hispanics than the base audience of Twitter users.

What does this tell us? It could tell us a few things:

  • Hispanics aren’t talking about this topic (if you’re doing conversation research) or buying this product (if you’re analyzing sales data)
  • The industry or brand in question does not appeal to Hispanics
  • The industry or brand in question ignores Hispanics

The definitive answer would require more detailed research, but seeing the huge disparity in the indexes gives us reason to investigate and perhaps an opportunity to fuel decisions to improve the business.
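For the mechanics: an index is simply a segment’s share of your audience divided by that segment’s share of the base audience. Here’s a quick sketch with invented base rates (not Twitter’s actual figures) that roughly reproduce the numbers above:

```python
# Index = segment share in our audience / segment share in base audience.
# Both distributions below are invented for illustration; they are not
# Twitter's actual demographics.
audience = {"caucasian": 0.68, "hispanic": 0.05, "asian": 0.04}
base     = {"caucasian": 0.60, "hispanic": 0.18, "asian": 0.05}

for segment in audience:
    index = audience[segment] / base[segment]
    print(f"{segment}: index {index:.2f} ({index - 1:+.0%} vs. base)")
```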

And keep in mind that demographics aren’t the only thing that can be compared in index form to Twitter or other data sets. You simply need known, common data points. In CRI’s research, we frequently surface indexing for age, gender, ethnicity and geography, but also for social interests, professions, bio terms and more.

Indexing is a powerful statistical feature for a researcher or a marketer to understand. Understanding it could be the key to unlocking equally powerful insights for your business.

For help with understanding your audience and how they index compared to known audiences, drop us a line. We’d love to help.
