
December 13, 2016

Testing Underlines the Importance of Facebook Topic Data

Facebook Topic Data is perhaps the most important, yet underestimated sea of consumer data in existence. One notable C-level executive at a social listening software company told me recently that they weren’t focusing on Facebook Topic Data much because their clients didn’t show much interest in it.

I, for one, hope that changes quickly. And the reason is simple math.

In several separate tests over the last six months, we at Conversation Research Institute have entered standard brand and topic queries into popular social listening platforms. Our testing included NetBase, Nuvi, Brandwatch, Sysomos and Mention. In each test, we looked at how many conversations surfaced over a given time period, then we input the exact same search strings into a Facebook Topic Data search to compare.

Almost to a decimal place, we could predict how many conversations would surface on Facebook based on the number of conversations we found on the open web. Would you be surprised to hear that Facebook yields nearly 1.5 times as many?

That’s right. According to our testing across a dozen or so different terms, Facebook accounts for around 60% of the online conversation. In some cases, it’s even higher.

A recent search conducted for a client in the pest control industry turned up 58,000 interactions on Facebook between Nov. 22 and Dec. 12. When you use Facebook Topic Data, the validity of the posts you receive is managed by DataSift — Facebook’s exclusive data provider. That means DataSift handles the disambiguation to ensure the posts you get are the posts you want, rather than ones about irrelevant topics.

Out of those 58,000 interactions, we estimate that only about 10% are irrelevant — those not caught by DataSift’s processing. (In all fairness, two articles shared in the period included references to politicians, lawyers and even a religious group as “termites,” which are difficult to eliminate without manual analysis.)

However, in that same period of time, searching the open web for the exact same Boolean string, we found 8,690 mentions. Some 4,560 of them were on Twitter, with Reddit (352) coming in a distant second. And those numbers do not factor in disambiguation (meaning the open web search would actually net far fewer relevant results).
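To put the numbers side by side, here’s a quick sanity check of the math in Python. This is a minimal sketch: the 10% irrelevance figure is our own estimate from manual review, and the counts are the ones from the searches above.

    # Share-of-conversation math from the pest control example above.
    facebook_raw = 58_000                    # Facebook Topic Data interactions, Nov. 22 - Dec. 12
    facebook_relevant = facebook_raw * 0.90  # our estimate: roughly 10% is irrelevant
    open_web = 8_690                         # open web mentions, same Boolean string and period

    share = facebook_relevant / (facebook_relevant + open_web)
    print(f"Facebook share of the total conversation: {share:.0%}")  # ~86%

    # The generic pattern from our testing: if Facebook surfaces about 1.5x
    # what the open web does, its share is 1.5 / (1.5 + 1.0) = 60%.
    print(f"1.5x pattern: {1.5 / 2.5:.0%}")  # 60%

And remember: the open web figure is not disambiguated, so Facebook’s real share in this case is likely even higher.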

Twitter is the darling of the social analytics industry because it’s free and open to analyze. Facebook is a walled garden that protects its users’ posts from one-by-one cataloging and analysis by the social media software platforms of the world. While Facebook Topic Data does not provide post-level data, meaning you can’t index and analyze every single post individually, the sheer volume of conversation makes it important.

But make no mistake about it: Facebook is where most online conversations happen. And Facebook Topic Data is going to be essential research fodder for anyone interested in understanding their customers.

Need help finding and analyzing Facebook Topic Data for your company? Drop us a line. We would love to help you understand the conversation.

November 21, 2016

An Example of Why Social Listening Needs Conversation Analysis

A key value proposition for Conversation Research Institute is that we apply conversation analysis to social listening data to find the insights that can help drive your business decisions. But that’s not just a fancy set of words; there’s real reason behind it.

First, know that what we mean by offering analysis is that social listening tools themselves aren’t enough. Pretty charts and graphs and word clouds don’t do your business any good if you can’t explain what they mean, how the data was discovered and what insights surfaced that can help you.


No social listening software does that for you. You have to have conversation analysis – from a human being – to understand the data and surface that information manually.

Case in point: While working on a research project for an upcoming Conversation Report, we found this entry in a sea of data about the elderly care space:

“The social worker at the nursing home ~ when mom first went there ~ had to go to bat for mom and went to court to get a guardian (not my brother) for mom.”

The software in question gave this entry a neutral sentiment and picked out no sub-topics or themes. It surfaced “social worker,” “nursing home” and “guardian” as word cloud entries but, again, did not attach any sentiment or context to them.

Because we manually score and analyze this data, and because our perspective is the voice of the consumer as it relates to elderly care providers (nursing homes, assisted living facilities, independent living communities and other long-term care providers), we can add much more context to the analysis:

  • The sentiment is negative toward the nursing home because the patient needed an advocate
  • The sentiment is positive toward the social worker who served as advocate
  • The source is a family member
  • The theme is patient advocacy
  • A sub-theme is non-family guardianship

And that’s before we went to the original post (which has other excerpts appearing in the research) to clarify even more:

  • The brother in question filed for guardianship after lying for years about having the mother’s power of attorney
  • The social worker was advocating for the patient, but also the rest of the family
  • The author (a daughter of the patient) was considering hiring a lawyer to fight the brother’s claim for guardianship.

So family in-fighting over the burden of care, cost and decision-making was another important theme.
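To make that concrete, here’s a minimal sketch of how an analyst might record a manually scored entry like this one. The structure and field names are ours, purely for illustration; no listening platform exports data in this shape.

    from dataclasses import dataclass, field

    @dataclass
    class ScoredConversation:
        """One manually analyzed entry from a social listening export."""
        excerpt: str
        source_type: str  # who is speaking, e.g. a family member of the patient
        entity_sentiment: dict = field(default_factory=dict)  # sentiment per entity, not one blanket score
        themes: list = field(default_factory=list)
        sub_themes: list = field(default_factory=list)

    entry = ScoredConversation(
        excerpt="The social worker at the nursing home ... went to court to get a guardian ...",
        source_type="family member of patient",
        entity_sentiment={"nursing home": "negative", "social worker": "positive"},
        themes=["patient advocacy", "family in-fighting over care"],
        sub_themes=["non-family guardianship"],
    )

The entity_sentiment field is the point: a single neutral score for the whole post, which is what the software produced, hides the fact that the same sentence is negative toward one party and positive toward another.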

When you let a computer spit out analysis of tens of thousands, or even millions, of conversations, you get roughly one tenth of the context and actual insight possible from truly understanding what is being said. Certainly, at scale there’s no way to be as thorough.

But relying on automatic charts and graphs is keeping you away from the one thing you’re looking for: True consumer insight.

That’s what we surface. If you’re interested in finding it for your brand, let us know.

 

November 16, 2016

Understanding your audience with conversation research

There’s a spirits brand we’re familiar with at the Conversation Research Institute, not because they’re a client, but because they’re a favorite of ours when we break for a drink at the end of the week. Their marketing is not unlike that of other spirits brands in the category. It’s focused on tradition, heritage and quality. It’s aimed at men of a particular status in life.

Honestly, you could take one of about two dozen brands in this category and put them in the same advertisements or even social media posts and, generally, the communications would work.

But we did some snooping around the conversation about the brand and found something interesting. The professions of the people who talk about the brand don’t exactly align with who the brand thinks they’re talking to.

Over the course of two months, almost one fourth of the authors talking about the brand online listed themselves as artists. While more research certainly needs to be done to determine what type, what gender, how serious and the like, if you are targeting your messaging at male executives, doesn’t this data give you pause?

Yes, 15 percent of the authors talking about the brand fall under the executive label. But when the labels of “artist,” “teacher” and even “journalist” add up to almost half of the online conversation about the brand, don’t you think segmenting and targeting those audiences could produce more, bigger or better results?

Conversation research isn’t just about finding sentiment and tone. It’s about uncovering insights about your brand that help you make critical marketing and business decisions. This particular spirits brand is missing out on a huge content marketing, or even paid targeting, opportunity if it isn’t paying attention to the data that conversation research can unearth.

More can be had for your brand. Let us know if we can help.

November 1, 2016

Can Conversation Research tell you why sales are down?

 

A large national retailer in the food and beverage industry was riding high last year. Sales were up, the brand was healthy, consumers were immersed in the experience. Years of hard work had put the brand on the top of the heap in their category.

But then they noticed that sales of certain beverages had started flatlining. They couldn’t quite figure out why. Nothing in their formulas had changed. Customers weren’t indicating why they were switching drinks or passing on them when they ordered. What was the brand to do?

They turned to online conversations and posed the question, “Are sales for these drinks flatlining because of a consumer shift or something else?” Consumers would likely tip their hand if it were the former. If the research came back inconclusive, the cause likely wasn’t a consumer need but something else.

The conversation research for the brand turned up an insight that explained it. The brand’s customers were becoming increasingly concerned about the sugar content of the drinks in question. They were interested in healthier options.

So the brand formulated a new line of fruit-based, all-natural drinks just in time for spring.

The sugary drink sales stayed flat while the new line took off, exceeding expectations and satisfying customers.

So yes. Conversation research can tell you why sales are down. It may also tell you how to make them go the other direction.

Call us to see how conversation research can help your brand.

October 24, 2016

A peek inside Conversation Research around the travel industry

Understanding how conversation research data can help your business is certainly your first step in knowing what to ask for, who to ask it from and how you might approach discovering insights for your brand. There’s high-level data that points you in a general direction, then specific, granular research that can point to specific insights that help you make decisions.

I recently had the honor of sharing information about conversation research with the audience at TBEX, the world’s premier travel writing and blogging conference, in Manila, Philippines. In preparation for that talk, I recorded a short video to share some of the differences between high-level and specific insights with you. I also talk a bit about a specific example of a high-level insight that led to answers at a granular level.

So, what questions do you have about your business or industry that the consumer conversation may answer? I’d be happy to respond with how conversation research may be able to help. Go ahead — the comments are yours!

October 10, 2016

Diet soda buzz is flat, but so are listening standards

As I write this, I’m on day nine without drinking diet soda. This coming from someone who has probably averaged 6-12 cans of soft drink per day since childhood. And no, I’m not exaggerating.

The caffeine withdrawal headaches are gone, but I still don’t like drinking water all the time, though I do feel a bit lighter and healthier, which was the point.

While I jokingly said when I started this process that the sales and marketing teams at Diet Pepsi were in for a rough fall wondering why their Louisville, Ky., volume just disappeared, it seems I may be the least of their concerns.

Engagement Labs released a report last week on soft drink brands that shows a surprising decline in online and offline conversations about diet sodas. Their report claims consumers’ passion for diet soda has “gone flat” but that people are still talking about their love for sugared soft drinks more than ever.

Engagement Labs combines online conversation research, like the kind we do at the Conversation Research Institute, with person-to-person discussions in focus group form. They combine those two scores into what they call “TotalSocial” and present a baseline score for comparing brands.

While all the details of how the score is formulated are certainly proprietary, if you assume all are scored on the same measurement system, the results are intriguing.

 

Coca-Cola is the standard bearer of the soda world, as you would expect, scoring a 50 on the TotalSocial scale. The industry average is around 40. Diet Mountain Dew and Diet Coke are the only two low-calorie options that hit that 40 mark, the rest are below. Diet Pepsi (30), Coke Zero (31) and Diet Dr. Pepper (36) are at or near the bottom of the list.

The main concerns or topics Engagement Labs points to as reasons? Health concerns about sugar and artificial sweeteners, the push for natural ingredients and backlash to recent formula changes by some brands. Engagement Labs offers the opinion that soda brands need to find ways to drive positive consumer engagement for their diet soft drinks the way many do for their sugary brethren.

Of course, Engagement Labs is a marketing company with what looks like a subscription-based measurement tool trying to hook a brand or two as a client, too.

When I see data like this, I’m certainly interested. Looking at how one company, agency or tool ranks and qualifies social media data is always interesting. My skeptic brain kicks in and tries to punch holes in the methodology.

While I don’t know a lot about Engagement Labs’ approach (maybe they’ll chime in and enlighten us in the comments), my skepticism tells me they’re likely running a mass social media scan through a listening platform without appropriate disambiguation. But that’s balanced by the fact that they also claim to offer focus group-esque person-to-person interviews. Those require real work and often yield much more valid responses, since the questions can be directed.


What these reports and surveys typically lead me to, however, is that we don’t really have an industry standard for analyzing and understanding online conversations. Each tool brings in its own volume of online conversations and the volumes never match. NetBase might show 380,000 mentions of a brand while Crimson Hexagon shows 450,000, Brandwatch 145,000 and Radian6 something completely different.

This is why CRI takes a tool-agnostic approach. We’d rather assume that sampling enough from each tool and pulling together an aggregate survey of the online conversation space gives us meaningful data. At least more meaningful than what any one tool offers.
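As a rough sketch of what that aggregation means in practice, imagine pulling the same query’s volume from several platforms and treating the spread itself as information. The figures below are the illustrative ones from the paragraph above, not real benchmarks.

    from statistics import mean, median

    # Mention volumes reported by different tools for the same brand query
    # (illustrative numbers only).
    volumes = {"NetBase": 380_000, "Crimson Hexagon": 450_000, "Brandwatch": 145_000}

    print(f"mean:   {mean(volumes.values()):>9,.0f}")    # 325,000
    print(f"median: {median(volumes.values()):>9,.0f}")  # 380,000
    print(f"spread: {max(volumes.values()) - min(volumes.values()):>9,}")  # 305,000
    # No single tool is "right"; sampling from each and aggregating is the hedge.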

It’s an approach I can defend to clients, who then won’t drive me to drink (Diet Pepsi) again.

For more on how the Conversation Research Institute can help you uncover insights about your customers or brand, give us a call or drop us a line.

October 4, 2016

The Achilles’ Heel of Social Listening Software

If you use social listening software there’s a good chance you share a frustration with thousands just like you: You can never get the right data. Disambiguating online conversation searches is part Boolean Logic mastery, part linguistics and part voodoo. Or so it seems.

Disambiguation refers to weeding out all the data that comes back in your search query that isn’t relevant. It is a fundamental skill in the practice of conversation research. Type in the brand name “Square,” for instance, and you’re going to have a hard time finding anything that talks about Square, the credit card processing app and hardware. Instead, you’ll find a sea of mentions of the word “square,” including stories about Times Square, the square root of things and 1950s parents whose children called them squares.
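To illustrate, here’s the kind of Boolean string a researcher might build to isolate Square the payments company. This is a generic sketch; every platform has its own operator syntax, and a production query would run much longer.

    "Square" AND ("card reader" OR "point of sale" OR payments OR "Square Cash" OR "Square Register")
    AND NOT ("Times Square" OR "town square" OR "square root" OR "square foot" OR "fair and square")

Note the trade-off: every exclusion risks throwing away a relevant post, and every required term risks missing mentions of the brand that lack any qualifying context. That tension is exactly why disambiguation is part craft.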

Disambiguation is a big problem for social listening platforms, yet most of them completely ignore the end user’s need for help. Some have built Boolean logic helpers into their software; Sysomos and NetBase have nice ones. But the only marketing professionals (the audience this type of software targets) who understand Boolean logic switched majors in college.

What happens when someone who isn’t fluent in Boolean logic searches for conversation topics? You get a lot of results you aren’t interested in. And sadly, most end users of these platforms don’t know any better. They see results, know they can output a couple of charts or graphs for the monthly report, and they’re done.

But the results they’re looking at are still littered with irrelevant posts. You can tweak your Boolean string all you want, but you’re likely to come up with something that looks right but isn’t. And we haven’t even gotten to the Achilles’ heel yet!

Case in point: I ran a brand search for a major consumer company last week. This was a simple brand benchmarking project where I was trying to identify all the online conversations that mentioned the brand, then decipher what major topics or themes emerged in those conversations.

My first return from the software was 21,000 conversations. As I reviewed them, I realized there was a lot of spam. After three hours of Boolean revisions, I narrowed the automated results list to 1,654 conversations. But guess what? While they were all valid mentions of the brand, many of them were job board postings, stock analysis and retweets of news items mentioning the brand. None of these categories — which will likely show up in the automated searches for any sizable brand — are relevant to what the client asked of me: What are the topics of conversation when people talk about us?
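Some of that narrowing can be scripted before a human ever looks at the data. Here’s a hedged sketch in Python of the kind of heuristic second pass one might run; the patterns are illustrative, not a complete or platform-specific filter.

    import re

    # Heuristic flags for post types that mention a brand but aren't consumer
    # conversation: job postings, stock chatter and retweets of news items.
    NOISE_PATTERNS = {
        "job posting":   re.compile(r"\b(hiring|apply now|job opening|careers?)\b", re.I),
        "stock chatter": re.compile(r"\$[A-Z]{1,5}\b|price target|upgraded|downgraded", re.I),
        "news retweet":  re.compile(r"^RT @\w+:"),
    }

    def classify_noise(post_text):
        """Return the first noise category a post matches, or None if it
        survives to the manual-scoring pile."""
        for label, pattern in NOISE_PATTERNS.items():
            if pattern.search(post_text):
                return label
        return None

    posts = [
        "RT @BizNews: Brand X announces quarterly results ...",
        "We're hiring! Apply now for a sales role at Brand X.",
        "$BRX price target raised to $42 after earnings.",
        "Brand X's new flavor is actually great.",
    ]
    for post in posts:
        print(classify_noise(post) or "keep for manual scoring", "->", post[:45])

Heuristics like these still miss plenty, which is why manual scoring follows.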

So I manually scored the 1,654 conversations, creating categories and sub-categories myself. I also manually scored sentiment for any that made it to the “relevant” list. Here’s what I found:

  • 339 relevant conversations (Achilles’ heel coming)
  • 50% were negative, 32% positive and 18% neutral (compared to the automated read of 92% neutral, 5% positive and 3% negative)

And here’s the Achilles’ heel (some topics redacted for client anonymity):

[Topic chart: a major consumer company’s online conversations over a three-month span]

 

Despite manual scoring and categorizing, the majority of results I found fell into a category I called “News Reaction.” These were almost all retweets of people reacting to a news article; the articles themselves had been removed in my disambiguation process, but the reactions slipped through. The client doesn’t care about the news article (for this exercise) but about what consumers are saying.

The Achilles’ heel of social listening platforms is that they generally do not automatically disambiguate your data well, and even when you manually score it, there are reactions and by-products of original posts included that you don’t care about. (There are probably also posts excluded that you do care about, but my guess is those are of less concern if your search terms are set well.)

This is the primary reason conversation research cannot be left to machines alone. Left to themselves, the platforms will make you believe something that isn’t actually true.

For more on how conversation research can help your brand or agency, give us a call or drop us a line.

 

 


September 27, 2016

Why small samples matter in Conversation Research

 

Conversation research is distinct from traditional market research in that it is largely unstructured. We use a variety of software tools to process the data sets and produce some degree of organization – topics, sources, themes, etc. – but you’re not pulling a sample of 1,000 people of a certain demographic and asking them the same questions here. You’re casting a wide net, looking for similarities in random conversations from around the world.

So when your review comes back with 100 conversations out of 23,000, it’s easy to dismiss that percentage (well under half a percent) as not valid. But let’s look at an example and see if that validity needs to be reconsidered.

CRI recently conducted a high-level scan of the online conversations around work-life balance with our friends at Workfront. The project management software company focuses a lot of its content on work-life balance, since its solution helps bring that result to marketing agencies and brand teams around the world.

Over the 30-day period ending September 19, we found 23,021 total conversations on blogs, social networks, news sites, forums and more – essentially any publicly available online source where people could post or comment – about work-life balance.

If you focus on the 23,021 as your total pool of conversations, it might frustrate you that only eight percent (1,827) could be automatically scored for sentiment. (One can manually score much more, and CRI typically does a fair amount of work to close that gap, but it is an exercise in time and resources that for this project both parties elected to set aside.)

But if you take that eight percent – those 1,827 conversations – and now consider them your sample set, you’ve got something. There, we discover that 79 percent of the scored conversations were positive – people are generally in favor of or have good reactions to the concept of work-life balance. But that means 21 percent of them don’t.

And this is where our curiosity is piqued.

 

It turns out the predominant driver around the negative conversations on work-life balance is that the concept itself is a myth. Out of the 382 total negatively scored conversations found, 98 of them indicated in some way that work-life balance was a lie, a farce, an illusion and so on.

Another 10 were tied to a conversation around a piece of content exposing the “lies” of work-life balance, also indicating there’s some level of mistrust that it is attainable. And 10 more revolved around a reference to work-life balance being overrated.

So while the negatively scored conversations were only about 1.7% of the total conversation set, they were 21% of the subset that could be scored for sentiment. And of those negatives, more than one quarter focused on the concept not being real at all.
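For anyone who wants to check the arithmetic (in Python, for the spreadsheet-averse):

    total = 23_021    # all work-life balance conversations found in 30 days
    scored = 1_827    # subset that could be automatically scored for sentiment
    negative = 382    # negatively scored conversations
    myth = 98         # negatives calling work-life balance a myth, lie or illusion

    print(f"scorable share of total:      {scored / total:.1%}")     # ~7.9%
    print(f"negative share of scored set: {negative / scored:.1%}")  # ~20.9%
    print(f"negative share of total:      {negative / total:.1%}")   # ~1.7%
    print(f"'myth' share of negatives:    {myth / negative:.1%}")    # ~25.7%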

This is where deeper analysis can help us synthesize true insight. Why do people think it’s a myth? Is it that the naysayers are likely cynics who cannot draw hard lines between their work time and focus and that which they spend away from work? Or do the demands of most jobs actually make it impossible to separate work from life? Or is it something else?

The bottom line is that one shouldn’t be dismissive of small data sets drawn from big data, especially when it comes to conversation research. Remember that while we may only be talking about 100 conversations out of 23,000, those 100 conversations are from people who are proactively discussing the topic at hand, not people being led down a Q&A path by an interviewer or a survey.

This brings delightful structure to that unstructured world.

September 23, 2016

What is conversation research?

Any research effort begins with the quest to define the problem. I suppose then any research business should start the same way. What exactly is Conversation Research and what problem does it attempt to solve?

Conversation Research is, simply, researching online conversations – those found in social media, or any other online mechanism that enables user-to-user discussion – with the purpose of discovering insight. We must keep the definition broad to allow inclusion of many varieties of sources, discussions, insights and purposes.

The Conversation Research Institute, for the record, focuses primarily on insights that drive business and marketing decisions. But our scope won’t always be limited there, either.

But aren’t we just saying “social monitoring” or “social listening” using synonyms? Not exactly. For me, social monitoring has always been a very reactive practice – one most commonly associated with customer service and reputation management: wait until we see what people say before we do anything with it.

Social listening, on the other hand, has been more of the proactive practice. Let’s go look for mentions of something specific in order to learn or direct our future activities.

Software companies and consultants use the two terms interchangeably, though they are very different in intent. And both have been further lumped under the larger banner of “social analytics.” But that category can include things like follower counts, conversion rates and the like, which a researcher mining for insights may or may not care about.

So Conversation Research is a different practice. It is analyzing the existing data around conversations among an audience segment. That segment could be a demographic, psychographic or set that contains some commonality, like all having mentioned a particular phrase or word.

The intention of Conversation Research is to deliver insight about the audience having the conversation. What do they say? How do they feel? What is their intention?

Knowing this information unlocks a third characteristic of a research audience. Instead of demographic or psychographic, it represents the social-graphic characteristics of an audience: What do they talk about in online conversations? What content do they read and share? What audiences do they influence? What influencers influence them?

All of these qualities of a given audience or audience member can unlock previously unknown data about the customer. It can open doorways to new paths to engagement and conversion. It is market research done with online conversations as the focus group – the largest focus group ever assembled, mind you. And it has the potential to revolutionize the way we get to know our customers and prospects.

Conversation Research is not intended to replace traditional market research, nor should it. But there are some interesting reasons to consider leveraging this approach as a supplement to, and in some cases instead of, traditional focus groups or surveys:

  • Conversations online are seen by far more people than hear them offline.
  • Conversations online are not led or framed by a questioner. You are mining real, voluntary, organic assertions from consumers.
  • Conversations online are not a snapshot in time but can be analyzed in real-time or as a trend over time.
  • While traditional research can offer more efficient sampling in terms of demographics, representativeness against national statistics, etc., conversation research can return hundreds of thousands of participants rather than samples of a few hundred people.

My colleagues and I have been mining online conversations for several years now. I was proud to publish what we believe to be the first-ever industry report based solely on online conversations in 2012. But now we are defining Conversation Research with a renewed focus and vigor.

Mining online conversations for insights from consumers is the next big trend in how brands use social media. Conversation Research is here. The only question is: How quickly will you reap the benefits?

For more on how The Conversation Research Institute can help, drop us a line or visit us at http://www.conversationresearchinstitute.com.

 
