The Conversation

Turning Social Media into Consumer Insights.

October 10, 2016

Diet soda buzz is flat, but so are listening standards

As I write this, I’m on day nine without drinking diet soda. This coming from someone who has probably averaged 6-12 cans of soft drink per day since childhood. And no, I’m not exaggerating.

The caffeine withdrawal headaches are gone, but I still don’t like drinking water all the time, though I do feel a bit lighter and healthier, which was the point.

While I jokingly said when I started this process that the sales and marketing teams at Diet Pepsi were in for a rough fall wondering why their Louisville, Ky., volume just disappeared, it seems I may be the least of their concerns.

Engagement Labs released a report last week on soft drink brands that shows a surprising decline in online and offline conversations about diet sodas. The report claims consumers’ passion for diet soda has “gone flat” but that people are still talking about their love for sugared soft drinks more than ever.

Engagement Labs combines online conversation research, like the work I am a part of at the Conversation Research Institute, with person-to-person discussions in focus group form. Those two scores feed into what they call a “TotalSocial” tool, which presents a baseline score for comparing brands.

While all the details of how the score is formulated are certainly proprietary, if you assume all brands are scored on the same measurement system, the results are intriguing.


Coca-Cola is the standard bearer of the soda world, as you would expect, scoring a 50 on the TotalSocial scale. The industry average is around 40. Diet Mountain Dew and Diet Coke are the only two low-calorie options that hit that 40 mark; the rest fall below it. Diet Pepsi (30), Coke Zero (31) and Diet Dr. Pepper (36) are at or near the bottom of the list.

The main concerns or topics Engagement Labs points to as reasons? Health concerns about sugar and artificial sweeteners, the push for natural ingredients and backlash to recent formula changes by some brands. Engagement Labs offers the opinion that soda brands need to find ways to drive positive consumer engagement for their diet soft drinks the way many do for their sugary brethren.

Of course, Engagement Labs is a marketing company with what looks like a subscription-based measurement tool trying to hook a brand or two as a client, too.

When I see data like this, I’m certainly interested. Looking at how one company, agency or tool ranks and qualifies social media data is always instructive. But my skeptic brain kicks in and tries to punch holes in the methodology.

While I don’t know a lot about Engagement Labs’s approach (maybe they’ll chime in and enlighten us in the comments), my skepticism tells me they’re likely running a mass social media scan through a listening platform without appropriate disambiguation. But that’s balanced by the fact that they claim to also offer focus group-esque person-to-person interviews. Those require real work and often yield much more valid responses, since the questions can be directed.


What these reports and surveys typically remind me of, however, is that we don’t really have an industry standard for analyzing and understanding online conversations. Each tool pulls in its own volume of online conversations, and the volumes never match. NetBase might show 380,000 mentions of a brand while Crimson Hexagon shows 450,000, Brandwatch 145,000 and Radian6 something completely different.

This is why CRI takes a tool-agnostic approach. We’d rather assume that sampling enough from each and pulling together an aggregate survey of the online conversation space gives us meaningful data. At least more meaningful than what any one tool offers.
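As a rough illustration (and only that), here’s a minimal sketch of what that kind of aggregation can look like, assuming each platform can export its mentions with a permalink. The file names and column names are hypothetical, not any vendor’s actual export format:

```python
import csv

def load_mention_urls(path, url_field="url"):
    """Read one tool's CSV export and return the permalink of each mention."""
    with open(path, newline="", encoding="utf-8") as f:
        return [row[url_field].strip().lower()
                for row in csv.DictReader(f) if row.get(url_field)]

# Hypothetical exports pulled from three different listening platforms.
exports = ["netbase_export.csv", "crimson_export.csv", "brandwatch_export.csv"]

unique_mentions = set()
per_tool_volume = {}
for path in exports:
    urls = load_mention_urls(path)
    per_tool_volume[path] = len(urls)   # each tool reports a different volume
    unique_mentions.update(urls)        # de-duplicate across tools by permalink

print("Per-tool volumes:", per_tool_volume)
print("Aggregate unique mentions:", len(unique_mentions))
```

The point isn’t the code itself; it’s that de-duplicating across several tools gives you a larger, more defensible picture than trusting any single platform’s count.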

And that aggregate is certainly something I can defend to clients who won’t then drive me to drink (Diet Pepsi) again.

For more on how the Conversation Research Institute can help you uncover insights about your customers or brand, give us a call or drop us a line.

October 4, 2016

The Achilles’ Heel of Social Listening Software

If you use social listening software, there’s a good chance you share a frustration with thousands just like you: You can never get the right data. Disambiguating online conversation searches is part Boolean logic mastery, part linguistics and part voodoo. Or so it seems.

Disambiguation refers to weeding out all the data that comes back in your search query that isn’t relevant. It is a fundamental skill in the practice of conversation research. Type in the brand name “Square,” for instance, and you’re going to have a hard time finding anything that talks about Square, the credit card processing app and hardware. Instead, you’ll find a sea of mentions of the word “square,” including stories about Times Square, the square root of things and 1950s parents whose children called them squares.
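To make that concrete, here’s a toy sketch of the kind of exclusion filtering a disambiguation pass involves. The Boolean string in the comments is generic, not any particular platform’s syntax, and the include/exclude terms are purely illustrative:

```python
# A generic Boolean-style query for the payments company might read:
#   "Square" AND (payments OR "card reader" OR "point of sale" OR invoice)
#   NOT ("Times Square" OR "square root" OR "square feet" OR "town square")
# The toy filter below applies the same idea to a handful of posts.

INCLUDE = ["payment", "card reader", "point of sale", "invoice"]
EXCLUDE = ["times square", "square root", "square feet", "town square"]

def looks_relevant(text):
    t = text.lower()
    if "square" not in t:
        return False
    if any(term in t for term in EXCLUDE):
        return False
    return any(term in t for term in INCLUDE)

posts = [
    "Just set up my Square card reader at the farmers market",
    "Times Square was packed last night",
    "What is the square root of 144?",
]
print([p for p in posts if looks_relevant(p)])  # keeps only the first post
```

Real disambiguation strings run far longer than this, and they still miss things, which is exactly the problem.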

Disambiguation is a big problem for social listening platforms, yet most of them completely ignore the end user’s need for help. Some have built Boolean logic helpers into their software; Sysomos and NetBase have nice ones. But the only marketing professionals (the audience this type of software targets) who understand Boolean logic switched majors in college.

What happens when someone who isn’t fluent in Boolean logic searches for conversation topics? You get a lot of results you aren’t interested in. And sadly, most end users of these software platforms don’t know any better. They see results, know they can output a couple of charts or graphs for the monthly report, and they’re done.

But the results they’re looking at are still littered with irrelevant posts. You can tweak your Boolean string all you want, but you’re likely to end up with something that looks right but isn’t. And we haven’t even gotten to the Achilles’ heel yet!

Case in point: last week I ran a brand search for a major consumer company. It was a simple brand benchmarking project: identify all the online conversations that mentioned the brand, then decipher what major topics or themes emerged in those conversations.

My first return from the software was 21,000 conversations. As I reviewed them, I realized there was a lot of spam. After three hours of Boolean revisions, I narrowed the automatic results list to 1,654 conversations. But guess what? While they all were valid mentions of the brand, many of them were job board postings, stock analysis and retweets of news items mentioning the brand. None of these categories — which will likely show up in the automated searches for any sizable brand — are relevant to what the client asked of me: What are the topics of conversation when people talk about us?

So I manually scored the 1,654 conversations, creating categories and sub-categories myself. I also manually scored sentiment for any that made it to the “relevant” list. Here’s what I found:

  • 339 relevant conversations (* Achilles’ heel coming)
  • 50% were negative, 32% positive and 18% neutral (compared to the automated read of 92% neutral, 5% positive and 3% negative)
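For anyone curious what tallying a manual pass like that can look like, here’s a minimal sketch, assuming the hand-scored conversations live in a spreadsheet with hypothetical column names. This is not the client data itself:

```python
import csv
from collections import Counter

# Hypothetical hand-scored file: one row per conversation, with columns
# "relevant" (yes/no), "sentiment" (positive/negative/neutral) and "category".
with open("manual_scores.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

relevant = [r for r in rows if r["relevant"].strip().lower() == "yes"]
sentiment = Counter(r["sentiment"].strip().lower() for r in relevant)
categories = Counter(r["category"].strip() for r in relevant)

print(f"{len(relevant)} relevant conversations out of {len(rows)} reviewed")
for label, count in sentiment.most_common():
    print(f"  {label}: {count / len(relevant):.0%}")
print("Top categories:", categories.most_common(5))
```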

And here’s the Achilles’ heel (some topics redacted for client anonymity):

[Chart: topic breakdown of a major consumer company’s online conversations over a three-month span]

Despite the manual scoring and categorizing, the majority of results I found fell into a category I called “News Reaction.” These were almost all retweets of people reacting to a news article; the article itself had been filtered out in my disambiguation process, but the reactions to it remained. The client doesn’t care about the news article (for this exercise); they care about what consumers are saying.

The Achilles’ heel of social listening platforms is that they generally do not disambiguate your data well automatically, and even when you manually score it, the results still include reactions and by-products of original posts that you don’t care about. (There are probably also posts you do care about that aren’t included, but my guess is those are of less concern if your search terms are set well.)

This is the primary reason conversation research cannot be left to machines alone. Left to themselves, the platforms will make you believe something that isn’t actually true.

For more on how conversation research can help your brand or agency, give us a call or drop us a line.


September 27, 2016

Why small samples matter in Conversation Research


Conversation research is distinct from traditional market research in that it is largely unstructured. We use a variety of software and tools to process the data sets and produce some degree of organization – topics, sources, themes, etc. – but you’re not pulling a sample of 1,000 people of a certain demographic and asking them the same questions. You’re casting a wide net, looking for similarities in random conversations from around the world.

So when your review comes back with 100 conversations out of 23,000, it’s easy to dismiss that slice (less than half of one percent) as not valid. But let’s look at an example and see if that validity needs to be reconsidered.

CRI recently conducted a high-level scan of the online conversations around work-life balance with our friends at Workfront. The project management software company focuses a lot of its content on work-life balance as its solution helps bring that result to marketing agencies and brand teams around the world.

Over the 30-day period ending September 19, we found 23,021 total conversations about work-life balance on blogs, social networks, news sites, forums and more – essentially any publicly available online source where people could post or comment.

If you focus on the 23,021 as your total pool of conversations, it might frustrate you that only eight percent (1,827) could be automatically scored for sentiment. (One can manually score much more, and CRI typically does a fair amount of work to close that gap, but it is an exercise in time and resources that for this project both parties elected to set aside.)

But if you take that eight percent – those 1,827 conversations – and treat them as your sample set, you’ve got something. There, we discover that 79 percent of the scored conversations were positive – people are generally in favor of, or have good reactions to, the concept of work-life balance. But that means 21 percent of them were not.

And this is where our curiosity is piqued.


It turns out the predominant driver of the negative conversations about work-life balance is that the concept itself is a myth. Out of the 382 negatively scored conversations found, 98 indicated in some way that work-life balance was a lie, a farce, an illusion and so on.

Another 10 were tied to a conversation around a piece of content exposing the “lies” of work-life balance, also indicating there’s some level of mistrust that it is attainable. And 10 more revolved around a reference to work-life balance being overrated.

So while the negatively scored conversations were just 1.7% of the total conversation set, they were 21% of the subset that could be scored for sentiment. And of those negative conversations, more than one quarter were focused on the concept not being real at all.
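For readers who like to check the arithmetic, here is the same breakdown as a quick back-of-the-envelope calculation using the figures reported above:

```python
total     = 23_021        # all work-life balance conversations in the 30-day window
scored    = 1_827         # conversations automatically scored for sentiment
negative  = 382           # negatively scored conversations
myth_like = 98 + 10 + 10  # "it's a myth / a lie / overrated" conversations

print(f"scored share of total:     {scored / total:.1%}")       # ~7.9%
print(f"negative share of scored:  {negative / scored:.1%}")    # ~20.9%
print(f"negative share of total:   {negative / total:.1%}")     # ~1.7%
print(f"'myth' share of negatives: {myth_like / negative:.1%}") # ~30.9%
```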

This is where deeper analysis can help us synthesize true insight. Why do people think it’s a myth? Are the naysayers simply cynics who cannot draw hard lines between the time and focus they give to work and what they spend away from it? Or do the demands of most jobs actually make it impossible to separate work from life? Or is it something else?

The bottom line is that one shouldn’t be dismissive of small data sets drawn from big data, especially when it comes to conversation research. Remember that while we may only be talking about 100 conversations out of 23,000, those 100 conversations are from people who are proactively discussing the topic at hand, not people being led down a Q&A path by an interviewer or a survey.

This brings delightful structure to that unstructured world.

September 23, 2016

What is conversation research?

Any research effort begins with the quest to define the problem. I suppose then any research business should start the same way. What exactly is Conversation Research and what problem does it attempt to solve?

Conversation Research is, simply, researching online conversations – those found in social media, or any other online mechanism that enables user-to-user discussion – with the purpose of discovering insight. We must keep the definition broad to allow inclusion of many varieties of sources, discussions, insights and purposes.

The Conversation Research Institute, for the record, focuses primarily on insights that drive business and marketing decisions. But our scope won’t always be limited to that, either.

But aren’t we just saying “social monitoring” or “social listening” using synonyms? Not exactly. For me, social monitoring has always been a very reactive practice – one that is most commonly associated with customer service and reputation management. Wait until we see what people say before we do anything with it.

Social listening, on the other hand, has been more of the proactive practice. Let’s go look for mentions of something specific in order to learn or direct our future activities.

Software companies and consultants use the two terms interchangeably, though they are very different in intent. And both have been further lumped under the larger umbrella of “social analytics.” But that can include things like follower counts, conversion rates and the like, which a researcher mining for insights may or may not have interest in.

So Conversation Research is a different practice. It is the analysis of existing conversation data from an audience segment. That segment could be a demographic, a psychographic or simply a set that shares some commonality, like all having mentioned a particular phrase or word.

The intention of Conversation Research is to deliver insight about the audience having the conversation. What do they say? How do they feel? What is their intention?

Knowing this information unlocks a third characteristic of a research audience. Instead of demographic or psychographic, it represents the social-graphic characteristics of an audience: What do they talk about in online conversations? What content do they read and share? What audiences do they influence? What influencers influence them?
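One way to picture that social-graphic layer is as a simple profile attached to each audience member alongside the usual demographic fields. This is a rough sketch, not a formal schema, and every field name here is illustrative:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SocialGraphicProfile:
    """Illustrative sketch of the conversational traits of one audience member."""
    topics_discussed: List[str] = field(default_factory=list)      # what they talk about
    content_shared: List[str] = field(default_factory=list)        # what they read and share
    audiences_influenced: List[str] = field(default_factory=list)  # who they influence
    influenced_by: List[str] = field(default_factory=list)         # who influences them

member = SocialGraphicProfile(
    topics_discussed=["work-life balance", "project management"],
    content_shared=["blog posts about remote work"],
    audiences_influenced=["marketing operations peers"],
    influenced_by=["industry analysts"],
)
print(member.topics_discussed)
```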

All of these qualities of a given audience or audience member can unlock previously unknown data about the customer. They can open doorways to new paths to engagement and conversion. This is market research done with online conversations as the focus group – the largest focus group ever assembled, mind you. And it has the potential to revolutionize the way we get to know our customers and prospects.

Conversation Research is not intended to, nor should it, replace traditional market research. But there are some interesting parameters to consider when leveraging this approach as a supplement to, and in some cases instead of, traditional focus groups or surveys:

  • Conversations online are seen by far more people than hear them offline.
  • Conversations online are not led or framed by a questioner. You are mining real, voluntary, organic assertions from consumers.
  • Conversations online are not a snapshot in time; they can be analyzed in real time or as a trend over time.
  • While traditional research can offer more efficient sampling in terms of demographics, representativeness relative to national statistics and so on, conversation research can return hundreds of thousands of participants rather than samples of a few hundred people.

My colleagues and I have been mining online conversations for several years now. I was proud to publish what we believe to be the first-ever industry report based solely on online conversations in 2012. But now we are defining Conversation Research with a renewed focus and vigor.

Mining online conversations for insights from consumers is the next big trend in brands using social media. Conversation Research is here. The only question is how quickly will you reap the benefits?

For more on how The Conversation Research Institute can help, drop us a line or visit us at http://www.conversationresearchinstitute.com.
