
March 23, 2017

The Truth About Cognitive Technology for Social Listening

On Wednesday, I presented a talk at IBM Amplify in which I explained the need for human analysis in social listening, the practice I call conversation research. I’ll admit the opportunity was intimidating. My task was essentially to look IBM executives, developers and users in the eye and tell them the social listening tool fueled by the famous Watson cognitive learning software was not very good.

But it’s not just Watson’s attempt at social listening that has issues. It’s all of them. Three of our most recent projects at the Conversation Research Institute tell a disappointing story. When we program these listening platforms to go find relevant conversations, we should see a respectable amount of just that in return. We don’t.

Social Listening Problems (photo: Tim Moran)

For our Dirt Devil project, we scored only 8.9% of the total posts our social listening software returned as relevant. It was worse for our industry report on the senior living space, where only 6% of the results were the voice of the consumer. A brand study we did for a major healthcare company returned just 7.2% relevant results.

And we’re talking about three different social listening platforms. No platform we’ve tested scores any better than these numbers.
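For clarity, the relevance rate behind these figures is just hand-scored relevant posts divided by total posts returned. A minimal sketch in Python (the counts below are hypothetical, not our project data):

```python
def relevance_rate(relevant: int, total: int) -> float:
    """Share of returned posts a human coder marked as relevant."""
    if total == 0:
        raise ValueError("no posts returned")
    return relevant / total

# Hypothetical counts in the spirit of the projects above
print(round(relevance_rate(89, 1000) * 100, 1))  # 8.9
print(round(relevance_rate(72, 1000) * 100, 1))  # 7.2
```

The arithmetic is trivial; the expensive part is the human scoring that produces the numerator.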

What this means is that without human analysis, scoring and curating of your social listening data — which is time and resource intensive — you’re paying for a lot of crap. And the technology is only getting incrementally better. It’s not growing by leaps and bounds the way the salespeople tell you. Even Watson and IBM’s powerful engines have trouble weeding through and deciphering unstructured data like social conversations.

The truth is that when the data is unstructured, inconsistent and unpredictable, cognitive technology can only do so much. At least so far.

In the hopefully not-too-distant future we’ll be able to say, “Watson, find relevant consumer conversations about Dirt Devil vacuums and tell me the themes that surface around product problems,” then see meaningful results in seconds. But that day is farther off than you think.

In the meantime, CRI can help. Let us know how we might help you separate the signal from the noise and deliver consumer insights that help drive smart marketing decisions for your brand.

NOTE: Photo by Tim Moran, one of my fellow IBM Futurists.

March 2, 2017

How Conversation Research Can Help Discover New Product Lines

One of the first case studies of conversation research I can remember came when Listerine used social listening tools to find out what customers said about the product. They discovered that many talked about the taste and the burn of using it, but also that some people use it to fight toenail fungus.

While that may sound funny, it’s no joke when you’re a brand manager and seeking to create new revenue streams for your company. Could Listerine have repackaged the formula as an application to help fight toenail fungus? Perhaps. Whether or not they did isn’t the point.

The point is this: are you looking at your customers’ conversations online to discover insights, like use cases, that can help your product team develop better or even different products?

We took a close look at Dirt Devil recently, not because they’re a client, but we wanted to run some tests on a consumer product that wasn’t difficult to distinguish from similarly named products. (We easily weeded out references to the weather anomaly and off-road race course and race truck by the same name.) These are the use cases we discovered:

Dirt Devil Use Cases - Conversation Research

Pet hair, hair, college dorms and hardwood floors were expected. But look at some of the others.

An air pump for blacksmithing? That might be a new product idea. Sticker removal? That’s interesting. Beekeeping? Okay, maybe a little weird. But what if you dive into the conversation and discover that the lower-power handheld vacs can remove bees from honeycombs without harming them, and that beekeepers everywhere would pay for one with a larger receptacle?

If you’re only looking at word clouds and pie graphs of themes, sentiments and genders, you’re not seeing the fullness of the conversation. From a product innovation and feature enhancement standpoint, conversation research can be a treasure trove of opportunity and, eventually, treasure itself.

And as an aside, none of this type of conversation research is available using software alone. These conversations about Dirt Devil were filtered and coded by hand. (Okay, by hands on a computer.) You don’t get use cases for products as an output of automatic or algorithmic topic surfacing.

Are you ready to do some product innovation research with CRI? Drop us a line. We’d be glad to chat.

January 5, 2017

How Audience Index Can Produce Insights in Conversation Research

It’s one thing to know what percentage of a given audience is male or female, or falls into various ages, ethnicities and so on. It’s another to understand how that audience compares to the norm. Indexing a given set of results against a generally understood or accepted point of reference not only frames the context of that audience characteristic, but can help you elevate important insights in conversation research.

Some social listening platforms offer audience indexing in their demographic and psychographic data. This seldom-used and often misunderstood statistic is one we constantly refer to at CRI, since it can lead to a more intimate understanding of the overall makeup of a given audience.

To better understand indexing, take a look at this chart on a given audience’s ethnicity. Its primary function is to show the percentage of the audience broken down by ethnicity.

But we’ve also displayed the index compared to the general demographic profile of a commonly used site (in this case, Twitter). We know from multiple resources (Pew, Northeastern University, etc.) that, in general, Twitter’s audience parallels the U.S. population in terms of ethnicity. Even with some variation considered, at a minimum we are comparing our audience to an audience of active social media users.

Indexing audiences in Conversation Research

As you can see in this audience, Caucasians index at 1.14. That means this audience is 14% more likely to be Caucasian than the base audience of Twitter users. So it skews white. It includes slightly more African-Americans, 19% fewer Asians, and slightly fewer American Indian or Native Islander and “other” members.

But look at the Hispanic index. An index of 0.28 means this audience is 72 percent less likely to feature Hispanics than the base audience of Twitter users.
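For readers who want the mechanics: an index is simply the segment’s share of your audience divided by its share of the base audience. A quick sketch (the shares below are invented for illustration, not taken from the chart):

```python
def audience_index(audience_share: float, base_share: float) -> float:
    """Index > 1 means the segment is over-represented vs. the base audience."""
    return audience_share / base_share

# A segment at 16% of your audience vs. 14% of the base indexes at ~1.14,
# i.e. about 14% more likely than in the base audience.
print(round(audience_index(0.16, 0.14), 2))  # 1.14

# A segment at 5% of your audience vs. 18% of the base indexes at ~0.28,
# i.e. about 72% less likely.
print(round((audience_index(0.05, 0.18) - 1) * 100))  # -72
```

Subtracting 1 from the index and multiplying by 100 converts it to the “X% more/less likely” framing used above.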

What does this tell us? It could tell us a few things:

  • Hispanics aren’t talking about this topic (if you’re doing conversation research) or buying this product (if you’re analyzing sales data)
  • The industry or brand in question does not appeal to Hispanics
  • The industry or brand in question ignores Hispanics

The definitive answer would require more detailed research, but seeing the huge disparity in the indexes gives us reason to investigate and perhaps an opportunity to fuel decisions to improve the business.

And keep in mind that demographics aren’t the only thing that can be compared in index form to Twitter or other data sets. You simply need known, common data points. In CRI’s research, we frequently surface indexes for age, gender, ethnicity and geography, but also social interests, professions, bio terms and more.

Indexing is a powerful statistical feature to understand as a researcher or a marketer. Understanding it could be the key to unlocking equally powerful insights for your business.

For help with understanding your audience and how they index compared to known audiences, drop us a line. We’d love to help.

December 22, 2016

How Conversation Research Supports Traditional Advertising

An advertising agency friend recently challenged me, arguing that conversation research isn’t relevant to traditional advertising. “We focus on print, radio and TV, so that online stuff isn’t a primary concern.” Yes, I laughed and shook my head heartily.

“So when consumers see those advertisements, what do you think they do next?” I replied.

“They either buy or they don’t.”

More furious head shaking.

“No, they go online to research. They talk to friends to see if someone knows more about that brand or has experience with them. They look for validation. In fact, I would argue that the online conversation is more important to purchase consideration than your ad in the first place (though they go hand-in-hand and one isn’t likely without the other).”

So he asked me to prove that conversation research could support traditional advertising. Even at a three-month-old company, I had a case study.

We were approached recently by a high-end home and lifestyle brand that had some suspicions about its advertising campaign. They didn’t think their messaging around quality and style was really resonating with consumers. Their campaign was developed on assumptions, not research, and they felt like they’d guessed wrong.

So we analyzed what consumers were saying about their brand — when they turned to the social web to find out more about it — and discovered the brand’s suspicions were correct. The buying decision topics that emerged were almost completely focused on price. There were no (as in zero) conversations discussing quality and style.

Now, the presence of topics (or lack thereof) doesn’t an insight make. Deeper conversation research could yield more understanding of why. Were style and quality assumptions? Did those decision points even matter? Were they simply too high priced for consumers to focus on anything else?

All of those questions made for a great follow-up research project.

The point to my friend was that conversation research supports traditional advertising in many ways. Some include:

  • Validating assumptions made without adequate consumer research
  • Confirming consumer talking points about the product to focus one’s messaging
  • Discovering tangential topics or qualities resonating with consumers the brand isn’t aware of
  • Uncovering audience segments that fall outside the brand’s designated target, enabling better targeting

And there’s more.

So how’s your advertising doing? Are you happy with the results? Does your messaging resonate with your audience? How do you know?

If you don’t, we can help. Drop us a line and we can chat about how.

November 21, 2016

An Example of Why Social Listening Needs Conversation Analysis

A key value proposition for the Conversation Research Institute is that we apply conversation analysis to social listening data to find the insights that can help drive your business decisions. That’s not just a fancy set of words; there’s real reasoning behind it.

First, know that what we mean by offering analysis is that social listening tools themselves aren’t enough. Pretty charts and graphs and word clouds don’t do your business any good if you can’t explain what they mean, how the data was discovered and what insights surfaced that can help you.

Conversation analysis

No social listening software does that for you. You have to have conversation analysis – from a human being – to understand the data and surface that information manually.

Case in point, while working on a research project for an upcoming Conversation Report, we found this entry in a sea of data about the elderly care space:

“The social worker at the nursing home ~ when mom first went there ~ had to go to bat for mom and went to court to get a guardian (not my brother) for mom.”

The software in question gave this entry a neutral sentiment and picked out no sub-topics or themes. It surfaced “social worker,” “nursing home” and “guardian” as word cloud entries, but again did not attach any sentiment or context to them.

Because we are manually scoring and analyzing this data, and our perspective is to look at the voice of the consumer as it relates to the elderly care providers (nursing homes, assisted living facilities, independent living communities and other long-term care providers), we add lots more context to the analysis:

  • The sentiment is negative toward the nursing home because the patient needed an advocate
  • The sentiment is positive toward the social worker who served as advocate
  • The source is a family member
  • The theme is patient advocacy
  • A sub-theme is non-family guardianship

And that’s before we went to the original post (which has other excerpts appearing in the research) to clarify even more:

  • The brother in question filed for guardianship after lying for years about having the mother’s power of attorney
  • The social worker was advocating for the patient, but also the rest of the family
  • The author (a daughter of the patient) was considering hiring a lawyer to fight the brother’s claim for guardianship.

So family in-fighting over the burden of care, cost and decision was another important theme.
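The hand-coding described above can be pictured as a small structured record with room for multiple sentiment targets per post, something a single post-level sentiment score can’t hold. A sketch (field names are ours for illustration, not any platform’s schema):

```python
from dataclasses import dataclass, field

@dataclass
class CodedPost:
    """A manually coded conversation entry with per-target sentiment."""
    text: str
    source: str                                  # e.g. "family member"
    theme: str
    sub_themes: list = field(default_factory=list)
    sentiment_by_target: dict = field(default_factory=dict)

post = CodedPost(
    text="The social worker at the nursing home ... went to court to get a guardian ...",
    source="family member",
    theme="patient advocacy",
    sub_themes=["non-family guardianship"],
    # One post, two sentiment targets -- impossible to express as one score
    sentiment_by_target={"nursing home": "negative", "social worker": "positive"},
)
print(post.sentiment_by_target["social worker"])  # positive
```

A flat “neutral” label is what you get when a tool is forced to collapse a record like this into a single field.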

When you let a computer spit out analysis of tens of thousands, or even millions, of conversations, you get roughly one tenth of the context and actual insight possible from truly understanding what is being said. Certainly, at scale there’s no way to be as thorough.

But relying on automatic charts and graphs is keeping you away from the one thing you’re looking for: True consumer insight.

That’s what we surface. If you’re interested in finding it for your brand, let us know.

 

November 16, 2016

Understanding your audience with conversation research

There’s a spirits brand we’re familiar with at the Conversation Research Institute, not because they’re a client, but because they’re a favorite of ours when we break for a drink at the end of the week. Their marketing is not unlike other spirits brands in their category. It’s focused on tradition, heritage and quality. It’s aimed at men of a particular status in life.

Honestly, you could take one of about two dozen brands in this category and put them in the same advertisements or even social media posts and, generally, the communications would work.

But we did some snooping around the conversation about the brand and found something interesting. The professions of the people who talk about the brand don’t exactly align with who the brand thinks they’re talking to.

Over the course of two months, almost one fourth of the authors talking about the brand online listed themselves as artists. While more research certainly needs to be done to determine what type, what gender, how serious and the like, if you are targeting your messaging at male executives, doesn’t this data give you pause?

Yes, 15 percent of the authors talking about the brand fall under the executive label. But the labels “artist,” “teacher” and even “journalist” add up to almost half of the online conversations about the brand. Don’t you think segmenting and targeting those groups could produce more, bigger or better results?

Conversation research isn’t just about finding sentiment and tone. It’s about uncovering insights about your brand that help you make critical marketing and business decisions. This particular spirits brand is missing out on huge content marketing and paid targeting potential if it isn’t paying attention to the data that conversation research can unearth.

More can be had for your brand. Let us know if we can help.

November 1, 2016

Can Conversation Research tell you why sales are down?

 

A large national retailer in the food and beverage industry was riding high last year. Sales were up, the brand was healthy, consumers were immersed in the experience. Years of hard work had put the brand on the top of the heap in their category.

But then they noticed that sales of certain beverages had started flatlining. They couldn’t quite figure out why. Nothing in their formulas had changed. Customers weren’t indicating why they were switching drinks or passing on the drinks when they ordered. What was the brand to do?

They turned to online conversations and posed the question, “Are sales for these drinks flatlining because of a consumer shift or something else?” Consumers would likely tip their hand if it was the former. If the research was inconclusive, the cause likely wasn’t a consumer need but something else.

The conversation research for the brand turned up an insight that explained it. The brand’s customers were becoming increasingly concerned about the sugar content of the drinks in question. They were interested in healthier options.

So the brand formulated a new line of fruit-based, all-natural drinks just in time for spring.

The sugary drink sales stayed flat while the new line took off, exceeding expectations and satisfying customers.

So yes. Conversation research can tell you why sales are down. It may also tell you how to make them go the other direction.

Call us to see how conversation research can help your brand.

October 24, 2016

A peek inside Conversation Research around the travel industry

Understanding how conversation research data can help your business is certainly your first step in knowing what to ask for, who to ask it from and how you might approach discovering insights for your brand. There’s high-level data that points you in a general direction, then specific, granular research that can point to specific insights that help you make decisions.

I recently had the honor of sharing information about conversation research with the audience at TBEX, the world’s premier travel writing and blogging conference, in Manila, Philippines. In preparation for that talk, I recorded a little video to share some of the differences between high-level and specific insights with you. I also talk a bit about a specific example of a high-level insight that led to answers at a granular level.

So, what questions do you have about your business or industry that the consumer conversation may answer? I’d be happy to tell you as a response how conversation research may be able to help. Go ahead — the comments are yours!

October 10, 2016

Diet soda buzz is flat, but so are listening standards

As I write this, I’m on day nine without drinking diet soda. This coming from someone who has probably averaged 6-12 cans of soft drink per day since childhood. And no, I’m not exaggerating.

The caffeine withdrawal headaches are gone, but I still don’t like drinking water all the time, though I do feel a bit lighter and healthier, which was the point.

While I jokingly said when I started this process that the sales and marketing teams at Diet Pepsi were in for a rough fall wondering why their Louisville, Ky., volume just disappeared, it seems I may be the least of their concerns.

Engagement Labs released a report last week on soft drink brands that shows a surprising decline in online and offline conversations about diet sodas. Their report claims consumers’ passion for diet soda has “gone flat” but that people are still talking about their love for sugared soft drinks more than ever.

Engagement Labs combines online conversation research, like the kind I am a part of at the Conversation Research Institute, with person-to-person discussions in focus-group form. They combine those two scores into what they call a “TotalSocial” tool and present a baseline score for comparing brands.

While all the details of how the score is formulated are certainly proprietary, if you assume all are scored on the same measurement system, the results are intriguing.

 

Coca-Cola is the standard bearer of the soda world, as you would expect, scoring a 50 on the TotalSocial scale. The industry average is around 40. Diet Mountain Dew and Diet Coke are the only two low-calorie options that hit that 40 mark, the rest are below. Diet Pepsi (30), Coke Zero (31) and Diet Dr. Pepper (36) are at or near the bottom of the list.

The main concerns or topics Engagement Labs points to as reasons? Health concerns about sugar and artificial sweeteners, the push for natural ingredients and backlash to recent formula changes by some brands. Engagement Labs offers the opinion that soda brands need to find ways to drive positive consumer engagement for their diet soft drinks the way many do for their sugary brethren.

Of course, Engagement Labs is a marketing company with what looks like a subscription-based measurement tool trying to hook a brand or two as a client, too.

When I see data like this, I’m certainly interested. Looking at how one company, agency or tool ranks and qualifies social media data is always interesting. My skeptic brain kicks in and tries to punch holes in the methodology.

While I don’t know a lot about Engagement Labs’ approach (maybe they’ll chime in and enlighten us in the comments), my skepticism tells me they’re likely running a mass social media scan through a listening platform without appropriate disambiguation. But that’s balanced by the fact that they claim to also offer focus-group-style person-to-person interviews. Those require real work and often yield much more valid responses, since the questions can be directed.

What these reports and surveys typically lead me to, however, is that we don’t really have an industry standard for analyzing and understanding online conversations. Each tool brings in its own volume of online conversations and the volumes never match. NetBase might show 380,000 mentions of a brand while Crimson Hexagon shows 450,000, Brandwatch 145,000 and Radian6 something completely different.

This is why CRI takes a tool-agnostic approach. We’d rather sample enough from each and pull together an aggregate survey of the online conversation space to get meaningful data. At least more meaningful than what any one tool offers.

And certainly one that I can defend to clients who won’t then drive me to drink (Diet Pepsi) again.
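A tool-agnostic pull can be as simple as pooling each platform’s export and de-duplicating on a shared key such as the post URL. A rough sketch (the exports and the `url` field are hypothetical):

```python
def aggregate_mentions(*exports):
    """Merge mention exports from several listening tools, de-duplicating by URL."""
    seen, merged = set(), []
    for export in exports:
        for mention in export:
            url = mention["url"]
            if url not in seen:
                seen.add(url)
                merged.append(mention)
    return merged

# Two hypothetical tool exports with one overlapping mention
tool_a = [{"url": "https://example.com/1"}, {"url": "https://example.com/2"}]
tool_b = [{"url": "https://example.com/2"}, {"url": "https://example.com/3"}]
print(len(aggregate_mentions(tool_a, tool_b)))  # 3
```

In practice the hard part is choosing a de-duplication key that works across platforms whose exports never quite line up; the merge itself is the easy bit.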

For more on how the Conversation Research Institute can help you uncover insights about your customers or brand, give us a call or drop us a line.

October 4, 2016

The Achilles’ Heel of Social Listening Software

If you use social listening software, there’s a good chance you share a frustration with thousands just like you: you can never get the right data. Disambiguating online conversation searches is part Boolean logic mastery, part linguistics and part voodoo. Or so it seems.

Disambiguation refers to weeding out all the data that comes back in your search query that isn’t relevant. It is a fundamental skill in the practice of conversation research. Type in the brand name “Square,” for instance, and you’re going to have a hard time finding anything that talks about Square, the credit card processing app and hardware. Instead, you’ll find a sea of mentions of the word “square,” including stories about Times Square, the square root of things and 1950s parents whose children called them squares.

Disambiguation is a big problem for social listening platforms, yet most of them completely ignore the end user’s need for help. Some have built Boolean logic helpers into their software. Sysomos and Netbase have nice ones. But the only marketing professionals (the audience this type of software targets) who understand Boolean logic switched majors in college.
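Under the hood, the Boolean strings those helpers generate boil down to require-one, exclude-any term matching. A toy sketch for the “Square” example (the term lists are invented for illustration):

```python
def is_relevant(text: str, require_any, exclude_any) -> bool:
    """Keep a post only if it mentions a required term and no excluded one."""
    t = text.lower()
    return any(term in t for term in require_any) and not any(
        term in t for term in exclude_any
    )

# Hypothetical disambiguation lists for the Square brand
require = ["square reader", "square payments", "square app"]
exclude = ["times square", "square root", "town square"]

print(is_relevant("Set up my Square Reader at the farmers market", require, exclude))  # True
print(is_relevant("Meet me in Times Square", require, exclude))                        # False
```

Real Boolean queries get far hairier than this (proximity operators, nested clauses, per-source syntax), which is exactly why most users give up and accept noisy results.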

What happens when someone who isn’t fluent in Boolean logic searches for conversation topics? You get a lot of results you aren’t interested in. And sadly, most end users of these software platforms don’t know any better. They see results, know they can output a couple of charts or graphs for the monthly report, and they’re done.

But the results they’re looking at are still littered with irrelevant posts. You can tweak your Boolean string all you want, but you’re likely to come up with something that looks right but isn’t. And we haven’t even gotten to the Achilles’ heel yet!

Case in point: I did a brand search for a major consumer company last week. It was a simple brand benchmarking project where I was trying to identify all the online conversations that mentioned the brand, then decipher what major topics or themes emerged in those conversations.

My first return from the software was 21,000 conversations. As I reviewed them, I realized there was a lot of spam. After three hours of Boolean revisions, I narrowed the automatic results list to 1,654 conversations. But guess what? While they all were valid mentions of the brand, many of them were job board postings, stock analysis and retweets of news items mentioning the brand. None of these categories — which will likely show up in the automated searches for any sizable brand — are relevant to what the client asked of me: What are the topics of conversation when people talk about us?

So I manually scored the 1,654 conversations, creating categories and sub-categories myself. I also manually scored sentiment for any that made it to the “relevant” list. Here’s what I found:

  • 339 relevant conversations (* the Achilles’ heel is coming)
  • 50% were negative, 32% positive and 18% neutral (compared to the automated read of 92% neutral, 5% positive and 3% negative)

And here’s the Achilles’ heel (some topics redacted for client anonymity):

 

Despite manual scoring and categorizing, the majority of results fell into a category I called “News Reaction.” These were almost all retweets of people reacting to a news article; the articles themselves had been removed in my disambiguation process. The client doesn’t care about the news article (for this exercise) but about what consumers are saying.

The Achilles’ heel of social listening platforms is that they generally do not automatically disambiguate your data well, and even when you manually score it, reactions and by-products of original posts you don’t care about are included. (There are probably also posts excluded that you do care about, but my guess is those are of less concern if your search terms are set well.)

This is the primary reason conversation research cannot be left to machines alone. Left to themselves, the platforms will make you believe something that isn’t actually true.

For more on how conversation research can help your brand or agency, give us a call or drop us a line.

 

 

