Social Response: It’s About More Than Sentiment

As businesses adopt social response processes – formalized methods for listening, responding, and tracking customer posts on the social web – the role of “sentiment” still sits oddly in the center of the typical response program. I say “oddly” because sentiment per se is at best a secondary factor in understanding and managing conversations in general. Used as a cornerstone for a social response program, it suggests most brands still don’t see customer-generated social media as important beyond PR and directional marketing trends. It is.

The truth is, customers talk about everything on the social web, and that includes talking about and directly with the brands they buy from or have heard about. Some of this conversation carries meaningful sentiment: a post like “Thank you, I love what you did!” is certainly positive, while “This sucks, once again it doesn’t work. #FAIL” is obviously negative. Look past the obvious, though, and sentiment fades as an indicator of what matters on the social web.

Fred Reichheld expressed it perfectly in 2005: the most important single takeaway from an interaction between a firm and a customer is the answer to one question: “As a result of this interaction, how likely are you to recommend me?” Consider the social feedback cycle, and the impact of “highly likely” vs. passive (“It’s OK, but…”) vs. a recommendation against your brand. “Highly likely to recommend” effectively amplifies your marketing spend, while recommendations against you require additional spend just to maintain parity – a real drag on your bottom line.
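The three buckets above – “highly likely,” passive, and “against” – map onto the standard Net Promoter Score arithmetic. A minimal sketch follows; the 0–10 scale and the promoter/passive/detractor cutoffs are the standard NPS convention, not something defined in this article:

```python
# Standard Net Promoter Score arithmetic: respondents answer
# "How likely are you to recommend us?" on a 0-10 scale.
# 9-10 = promoter, 7-8 = passive, 0-6 = detractor.
# NPS = % promoters - % detractors, giving a value from -100 to +100.

def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Ten survey responses: five promoters, three passives, two detractors.
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 7, 4, 3]))  # 30.0
```

Note that passives drop out of the score entirely: a merely “happy” customer moves the number no more than a silent one, which is exactly the distinction the article draws between satisfaction and advocacy.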

Reenter sentiment. More than a few brands triage incoming posts: “If positive, route to marketing for follow-up; if negative, route to customer care for resolution.” Seems logical, and it’s certainly easy to implement. The problem with this approach, however, is twofold. First, marketing never sees the gritty detail of real life and what goes wrong. From that point of view, ads become believable, set in a land where cellphones work on subways (most don’t) and where eating a triple cheeseburger while downing a 64 oz soft drink for lunch won’t eventually kill you (it will). Second, over in customer care, agents are steadily beaten down by piece-count productivity metrics set against the roar of a river of customer attacks. Having spent time in customer care as an agent, I can state as fact that what too many agents see far too often would earn the sender a mouth-washing if their mother saw it. That’s just wrong. But it happens every day.
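The two-bucket triage rule can be sketched in a few lines. This is a deliberately naive illustration of the approach the article critiques, not any vendor’s actual API; the function names (`classify_sentiment`, `route`) and keyword lists are hypothetical:

```python
# A toy sketch of sentiment-only triage: positive -> marketing,
# negative -> customer care. Real tools use trained models, not keywords.

def classify_sentiment(post: str) -> str:
    """Naive keyword-based classifier, for illustration only."""
    text = post.lower()
    if any(w in text for w in ("love", "thank", "great")):
        return "positive"
    if any(w in text for w in ("fail", "sucks", "broken")):
        return "negative"
    return "neutral"

def route(post: str) -> str:
    """The two-bucket routing rule from the text."""
    sentiment = classify_sentiment(post)
    if sentiment == "positive":
        return "marketing"
    if sentiment == "negative":
        return "customer_care"
    return "unrouted"  # neutral posts fall through the cracks

print(route("Thank you, I love what you did!"))               # marketing
print(route("This sucks, once again it doesn't work. #FAIL")) # customer_care
```

Even this sketch exposes the structural flaw: the split guarantees that marketing and customer care each see only half the conversation, and neutral posts – often the largest bucket – may be routed nowhere at all.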

The result is an ever-widening disconnect between the expectations set by TV ads and the reality of customer experience. My wife was shocked to learn that ads don’t have to be completely truthful, though like most of us she long ago assumed the standard was even lower – as in z-e-r-o – for political ads. It’s no wonder, then, that we react with a certain negativity when mileage isn’t what we expected. It’s a fact, for example, that most gasoline contains ethanol, while EPA mileage testing is still done with 100 percent gasoline. Your mileage will absolutely vary – negatively – as a direct result. That blazing broadband service you were promised? It’s often inferior even to old dial-up: I measured 2 kbps at SFO recently. It’s no wonder consumers are snarky on Twitter (not that it helps).

Given all of that, what’s the real value of sentiment analysis? At best, it’s a trendable measure of brand satisfaction; though, again, being “happy” and being “highly likely to recommend” are two different things. To be sure, an overall “positive” conversation is a good thing, and so measuring happiness makes a certain amount of sense. But I still have this uneasy feeling: when the conversation is upbeat, is it because everyone is happy, or because, like most of us, people are prone to being nice? Is my happy index going to withstand an ice-storm-driven outage that disrupts my customer deliveries for a week?

It’s in this context that sentiment actually gains a footing. Where sentiment does matter is in a brand’s ability to withstand an adverse event, or to gain from a new product launch. Simply put, we are more likely to forgive those we love than those we hate. Likewise, we are much more likely to try something new from a brand we are passionate about than from one we are not. On this front, take a look at Satmetrix’s SparkScore: a combination of social sentiment and Net Promoter Score (NPS) that indicates brand promotion and overall brand health, answering questions like, “How likely are you to recommend me in the face of an adverse event?” Get high marks on that count and you’re on solid ground.

That last scenario – “How likely are you to recommend me in the face of an adverse event?” – is “real world.” Stuff happens. The world isn’t always perfect and things don’t always go as planned. But strong brands seem to take it in stride because their customers take it in stride. I’ve seen it personally while flying: I’ve gone to bat for United (formerly Continental, where I was Executive Platinum), and I saw the same recently when a lengthy delay (eight hours) on a JetBlue flight leaving San Francisco failed to incite a riot, precisely because the airline was proactive in addressing the situation, and because the underlying attitude toward the brand (think NPS here) was strongly biased toward “promoter.” When Apple messes up, plenty of negative conversation happens. But the brand survives and sales remain strong as the advocates come out. Dell gets beat up on the surface, but take a look at its customer support forums: customers contribute hugely to the task of answering questions, creating a vibrant community in which the brand thrives. We excuse and rise up to support those we love.

OK, back to your response program and why understanding sentiment is not the single key to success. Your response program – which intrinsically means one-to-one issue resolution at scale – is at its best when it is building brand advocates. Customer care – which is where your response program is heading – demands a higher standard. Instead of making people happy, the objective of the response program should be to encourage people to recommend you. Don’t believe it? Try this simple thought experiment: split a team of CSRs into two groups and tell one to make the customer happy. Tell the other to resolve the issue and to do so in a way that results in a positive recommendation.

The first group will quickly realize that it can achieve its goal by giving products or services away for free: “No worries…we’re removing the charges for that” or “I’ve added two months of service free to your contract.” Customers may be happy, but your margins will quickly deteriorate. Weak margins reduce your ability to delight customers. It’s a downward spiral.

Over in the second group, though, things look different. We don’t recommend unless we are sure about actual performance: social capital is too important to each of us. We recommend based on satisfaction, not two free months of the same bad service. This group of agents is working alongside customers, so issues are being addressed and corrective remedies are being escalated and reported to product management. As a result customers are recommending the brand, because it is listening, responding, and improving. The NPS of the brand is rising. Sentiment may still be “negative” but the brand is strengthening itself while protecting its margins. The result? Customers are more likely to advocate for the brand, and that drives sales and builds margins.

In summary, when you build your response program understand that positive sentiment isn’t an end-goal: it’s a byproduct of your ability to delight customers while maintaining a healthy business. Focus on resolving issues and building a healthy business. Instead of sentiment, step up to a higher standard: the likelihood to recommend.
