What's Wrong With the Net Promoter Score
Three reasons why the Net Promoter Score is a waste of time.
There are so many things wrong with Net Promoter, where do I begin? Let's start with a simple list: it doesn't tell me anything new, it's based on flawed math, and it's not actionable.
No New Insight, Let Alone Predictive Power
The Net Promoter Score (NPS) is calculated by subtracting the percentage of detractors from the percentage of promoters. But what, exactly, is being net-promoted: a product, a service, a company, a brand? Well, it depends on how the survey is worded. If it's worded as "How likely is it that you would recommend [Company X] to a friend or colleague?" it boils everything down to the likelihood of word of mouth occurring for the company. But customers buy specific products or services, not companies; and they generally recommend specific products or services, not companies. A quick scan of tweets, forum posts, blog entries, or Facebook statuses will show that people talk about and tell others about products or services — e.g., "must have" boots this season, the digital camera they just bought, etc. Rarely do people recommend entire companies, unless the company has consistently gotten every dimension of its offering right — like Apple.
Net Promoter doesn't reveal anything about the company's product line, product innovations, accuracy of pricing, or operating efficiency, all of which would be more direct and accurate indicators of future sales or growth. For example, even if a company has a high NPS, if it grossly overpriced a particular product, no one would buy that product. What does NPS tell you about Apple (manufacturer) versus Amazon (retailer) versus Time Warner Cable (utility)? Practically nothing that isn't absurdly obvious — Apple has well-designed, cool products, and people will buy them; Amazon has great customer service and very useful site features, and people will shop online there; Time Warner, well, need I say more? So, the NPS is what I call an "it is what it is" metric — it tells you the obvious, isn't predictive in any way, and doesn't answer the "So what?" question.
The Net Promoter score isn’t even a number that can stand alone — it has no meaning when taken by itself. Everyone asks, what’s a good NPS? Well, there’s no answer because you have to look at NPS scores relative to other NPS scores in the same industry. You can’t compare NPS across industries or product categories because some products simply don’t lend themselves to word-of-mouth (e.g., toilet paper, toothpaste, laundry detergent, copy paper, etc.), while others could evoke extreme passion and sharing (e.g., fashion, restaurants, etc.). NPS also ignores the fact that the voices and reach of promoters or detractors can be drastically different depending on the digital channels they have at their disposal. There may be very vocal detractors who go online and write negative reviews. That would outweigh a non-vocal group of promoters, even if the company had a positive and high NPS.
Based on Bad Math
The Net Promoter Score is also based on a seemingly arbitrary 11-point scale — 0 through 10 — where 0 through 6 are detractors, 7 through 8 are passives, and 9 through 10 are promoters. Why not a symmetrical scale like 0 through 4, 5, and 6 through 10? Or something else entirely? In the words of other critics, NPS is unipolar in how the seminal question is phrased ("How likely are you to recommend?") but bipolar in how it's applied: detractors versus promoters. A score of 0 through 6 means "not likely at all to recommend," which is very different from someone actually detracting, i.e., stating negative attributes and why. The application of NPS is inconsistent with the scale used, and almost begs for a different scale like -3, -2, -1, 0, +1, +2, +3, running from "highly likely to recommend against," through "neutral," to "highly likely to recommend for."
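For concreteness, here's a minimal sketch (in Python, with made-up sample responses) of how the score is computed from raw 0-through-10 answers, using the standard bucket cutoffs described above:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the denominator but are otherwise ignored.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical survey responses on the 0-10 scale.
responses = [10, 9, 8, 7, 6, 3, 10, 9, 5, 7]
print(nps(responses))  # 10.0  (4 promoters, 3 detractors out of 10)
```

Note how a response of 6, which reads as merely lukewarm on a 0-10 scale, is bucketed exactly the same as a 0.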
“NPS is attitudinal rather than behavioral, measuring how many people say they would be likely to recommend, rather than how many are doing so. A large body of research indicates that claimed intention is a better reflection of present attitudes than future behavior,” according to this post on the blog for Vovici, a company that offers an application for monitoring and reacting to feedback.
Within the same industry or product vertical, NPS is problematic because there are many possible ways to arrive at the same number: varying the shares of promoters, passives, and detractors, we can arrive at an NPS of 20 in dozens of ways. A company with an NPS of 20 could have 20 percent promoters, 80 percent passives, and 0 percent detractors, while another company with an NPS of 20 could have 60 percent promoters, 0 percent passives, and 40 percent detractors. One could easily argue that the company with 20 percent promoters and 0 percent detractors is very different from the one with a polarized customer base of 60 percent promoters versus 40 percent detractors — even though their NPS is seemingly the same.
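A quick arithmetic sketch makes the point. The two customer mixes below (both hypothetical) describe wildly different businesses, yet they produce the identical score:

```python
def nps_from_mix(promoters_pct, passives_pct, detractors_pct):
    """NPS from the percentage share of each group (shares must sum to 100)."""
    assert promoters_pct + passives_pct + detractors_pct == 100
    return promoters_pct - detractors_pct

print(nps_from_mix(20, 80, 0))  # 20 -- no detractors at all
print(nps_from_mix(60, 0, 40))  # 20 -- heavily polarized customer base
```

The single number collapses the full distribution of sentiment into one figure, and the distribution is where the real story is.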
More importantly, NPS is not actionable. What if you had a good one? What does that mean? Perhaps, according to examples cited on Net Promoter.com, in the telecommunications industry, an NPS of 11 percent (AT&T) is the best of the bunch. Is it the best because of the technology, the service, the wireless offering, or the cable? AT&T is the company; what product or service is actually worth recommending, if any at all? We simply don't know the "So what?" What if you had a bad NPS? What do you do? Is there a particular underperforming or unprofitable product or service that should be cut from the product mix? Is there a weakness in customer service or an operational inefficiency that can be improved?
If a metric is just an “it is what it is” number, has no predictive power, can’t be used alone, and doesn’t give you clues about what to do — throw it out. It’s synonymous with useless.
Use Changes in Search Volume to Gauge Success
So, if you can't use NPS, what do you use? "Likeliness to recommend" is fine and directionally OK, but it doesn't necessarily correspond to "likeliness to buy" — which is what we as marketers need in order to know whether a marketing program is, or is going to be, successful.
Lift in search volume is a better indicator of whether marketing programs will be successful in driving sales. Specifically, if marketing programs drive lift in search volume for the company, the brand, or the specific product, we’ll have clues about which stage of the purchase funnel the customer is likely to be in. And the fact that they’re searching for additional information means they not only saw the ad and remembered it, but they also found it relevant and timely enough to take action — seeking additional information.
If consumers are searching broadly about a company (versus specifically about a product), we know they’re far earlier in the research cycle and higher up in the awareness stages of the funnel. However, as consumers get closer to the purchase, they’ll be searching much more specifically about products — i.e., the several candidates they’re considering. And their searches may focus on differences between the products and prices. In general, the more specific or detailed the search, the closer to the purchase the customer is in time and in propensity (i.e., further down the funnel).
Also, knowing exactly what customers are searching for — like their “missing links” — tells the marketer exactly what actions need to be taken. For example, if users are searching for “What is motor oil?” the marketer can direct them to the right content on a Web site or create such content. If users are searching for “Where do I change my motor oil?” the marketer can direct them to a store locator. If users are searching for “Which motor oil is best for my car in sub-zero winter conditions?” the marketer can direct them to user reviews or forum posts.
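As a sketch of that idea, a marketer could route incoming search queries to the right response using simple intent cues. The rules and destination labels below are purely illustrative, not a real query-classification system:

```python
def route_query(query):
    """Map a search query to a (hypothetical) marketing response
    based on crude question-word cues."""
    q = query.lower()
    if q.startswith("what is"):
        return "educational content"      # early funnel: explain the product
    if q.startswith("where"):
        return "store locator"            # ready to act: point to a location
    if q.startswith("which") or "best" in q:
        return "reviews and comparisons"  # late funnel: help them choose
    return "general landing page"

print(route_query("What is motor oil?"))               # educational content
print(route_query("Where do I change my motor oil?"))  # store locator
print(route_query("Which motor oil is best for sub-zero winters?"))  # reviews and comparisons
```

In practice the matching would be far more sophisticated, but the principle holds: the specificity of the query tells you both where the customer is in the funnel and what to show them next.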
So, what’s a good metric? It’s one that yields new insights, is based on user actions not opinions, and is actionable.