January 10, 2017
The first Conversation Report is due out any day now. Our dive into the buyer journey for the senior care space, which includes nursing homes, assisted living, independent living and more, comes in at around 15,000 words, more than 75 charts and graphs, and dozens of insights synthesized from the data to help senior care brands understand their customers better.
In conducting the research, we had a peculiar challenge. Our goal was to find not only posts produced by true customers, but posts from people actively considering senior care options for themselves or a loved one. How do you isolate not just the consumer, but one who is actively looking, without first knowing who they are or where they are? Both are answers you only get from the research itself.
It seems a Holmesian Catch-22.
But it’s not.
All of our broad-level research begins with understanding the consumer's conversation habits. We seek out individuals who have been through, or are going through, the buying process and interview them. But we don't necessarily ask all the questions we hope to answer with the research. Instead, we focus on how they go about discovering information about the product or service at hand. We uncover how they talk and think about the topic in terms of lexicon and verbiage. We try to get at what they might say in an online conversation should they turn to social media and online communities to ask questions about the topic.
By canvassing a small focus group on how people talk about buying or shopping for the product in question, we can then produce more accurate search variables to uncover similar conversations on the web. Consider it our social media version of Google's Keyword Tool. While search terms also contribute to our pool of knowledge about the audience, people may search for "senior care" but they don't use that term in a sentence when chatting about their search for a solution online.
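To make the idea concrete, here is a minimal sketch of what translating interview lexicon into search variables can look like. This is our illustration, not CRI's actual tooling, and every phrase and post below is invented: the point is simply that queries built from consumer language catch conversations that industry jargon would miss.

```python
import re

# Invented for illustration: industry jargon vs. the phrases
# interviewees might actually use when describing their search.
JARGON_TERMS = ["senior care", "assisted living facility"]
CONSUMER_PHRASES = [
    r"mom can'?t live alone",
    r"looking at nursing homes?",
    r"place for (my|her|his) (mom|dad|mother|father)",
]

def matches_consumer_lexicon(post: str) -> bool:
    """Return True if a post uses the language real buyers use."""
    text = post.lower()
    return any(re.search(pattern, text) for pattern in CONSUMER_PHRASES)

# Sample posts (invented). Note the jargon-only post is industry
# chatter, not a buyer in the consideration stage.
posts = [
    "We're looking at nursing homes near my dad's neighborhood.",
    "Top 10 senior care marketing trends for the year ahead.",
    "Mom can't live alone anymore and I don't know where to start.",
]

relevant = [p for p in posts if matches_consumer_lexicon(p)]
# relevant keeps the two buyer posts and drops the marketing post
```

Searching only for the jargon term "senior care" would have returned the one post that matters least; the consumer-lexicon patterns surface the two genuine buyer conversations.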
As you approach conversation research, treat your assumptions about your audience, and about how they discuss certain topics in online conversations, as biases. You need to vet them properly to get an accurate read on what is being said. One misstep in search variable construction and you could eliminate thousands of relevant conversations. Or, perhaps worse, you could pull in thousands of irrelevant ones to weed through.
This reinforces something you'll hear us say over and over at CRI: social listening software isn't enough. You have to add the human element to your data-gathering mechanism to make sense of all the noise.
How do you go about constructing your searches? We’d love to hear your thoughts and processes in the comments.