4 Reasons to Dump NPS

The business world was quick to embrace the Net Promoter Score (NPS). It gained widespread attention after Fred Reichheld wrote about the concept in 2003, billing the simple number as the single business metric needed for insight into customer satisfaction, success, and growth. It's not.

The NPS score comes from a single question that a business can pose to its customers: "How likely are you to recommend [COMPANY] to a friend or colleague?" This question supposedly reveals everything there is to know regarding loyalty, growth potential, overall customer satisfaction, and much more. It doesn't.

With NPS, every customer gets classified as a brand promoter, a passive, or a detractor. Admittedly, it sounds brilliant. The logic of increasing the share of promoters seems irrefutable, and the ease of measurement makes it even more compelling.

NPS has become so popular that it is often tied to executive compensation. Many companies boast a high NPS, and enterprise communications vendors are no exception. Scores are commonly included in marketing and sales presentations -- and what better reason to invest in customer engagement solutions than to improve one's own NPS?

But we're living in an era when social networks can amplify sentiment, and simply asking about a customer's willingness to promote a brand is too trivial a measure. NPS doesn't matter, and here are four reasons why we should all just forget about it.

1. It's Too Good to Be True

Customer satisfaction and loyalty are complex topics. Wouldn't it be great if a single question could reveal the truth about any complex topic? Think of how much time we could save with a one-question medical diagnosis, or a one-question jury trial... or, even something as mundane as house-buying: Would you be happy in this home?

Oddly, complex problems are not easily solved by asking a simple, single question. Customer satisfaction is a multi-dimensional issue. I like the product, but not the retailer. I like the product and retailer, but it's a bad location. I like the product, retailer, and location, but their air conditioner was broken and the heat made me irritable.

NPS advocates say it doesn't matter why; what matters is whether the customer is a brand promoter. Yes, there is some truth to that belief, but it's not particularly helpful. It's like saying that since sales are up, the product must be great. There are lots of reasons why sales may increase, and many of them have nothing to do with satisfaction, so the assumption is unwise.

2. We're Asking the Wrong Question

The problem with the "would you recommend" question is that it's an inquiry into the future. NPS is a survey about predicted behavior, which is often different from actual behavior. For example, many people might expect to lose weight in the future, but the actual data often reveals a different outcome.

With first-time customers, the future is all you have. But looking to the future is not as useful for repeat customers. It would be far more meaningful to ask whether they have already recommended the company or brand. Customer referrals are a proven driver of profitable business growth. This is why so many companies go as far as paying or otherwise incentivizing customers to refer friends and colleagues.

It's also important to ask new customers if they were referred or not. In other words, actual actions are more important than future intent and hope.


3. We're Getting Incomprehensible Answers

The question is weak, but the answer is even worse. NPS is not a simple yes/no question. Customers are asked to rate their likelihood of referring a friend or colleague on an 11-point scale from 0 to 10. Only the endpoints carry labels -- zero (not at all likely) and 10 (extremely likely) -- so the scores in between have no textual cues. Everyone supposedly knows the difference between a 6 and a 7.

Having eleven options invites a lot of subjectivity. It's very unlikely that even people receiving identical experiences will agree on the same score. With an 11-point scale, one might assume that the difference between two consecutive numbers is insignificant, but that's not quite the case. It makes sense that there's virtually no difference between a 3 and a 4, or a 5 and a 6. However, a 6 is hugely different from a 7, and that gap is dwarfed by the difference between an 8 and a 9.

NPS defies proven quality metrics. For example, quality metrics favor consistency. Even though some restaurants may occasionally serve a better burger, the establishment that consistently serves an 8 burger is considered the higher quality establishment. However, in NPS, if 100% of the customers give a score of 7 or 8, the NPS is a shameful zero.

This is because NPS treats scores of 0-6 as detractors, 7-8 as passives, and 9-10 as promoters. The scores within these buckets are meaningless -- for example, NPS cannot tell a 0 from a 6. This brings us to the final point...

4. We're Using Bad Math

NPS defies basic math. Even though responses fall on a 0-10 scale, a negative NPS is possible and even common. NPS converts scores into detractors, passives, and promoters, and the final number is determined by subtracting the percentage of detractors from the percentage of promoters, with the logic that more detractors is a bad thing.
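
To make the arithmetic concrete, here's a minimal sketch of that calculation in Python. The function name and the sample scores are purely illustrative, not part of any official NPS tooling.

```python
def nps(scores):
    """Compute a Net Promoter Score from raw 0-10 survey responses.

    Detractors score 0-6, passives 7-8, promoters 9-10.
    NPS = % promoters - % detractors, so it can range from -100 to +100.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)


# The consistent "8 burger" restaurant from above: every customer is a passive.
print(nps([7, 8, 7, 8, 7, 8]))   # 0.0 -- a "shameful zero" despite solid scores
```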

This introduces a number of unusual outcomes. For example, if a company improves its raw scores, it's possible that the NPS remains the same. If everyone who answered the referral question with a 5 or lower changed their score to a 6, the NPS would be unaffected.

Think about all the effort necessary for all of those surveyed customers to change their score from a piss-poor rating to a 6 -- and yet the NPS metric remains unchanged. This means no celebrations and no bonuses. On the other hand, huge swings could occur if customers changed scores from a 6 to a 7, or from an 8 to a 9.
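
To illustrate that asymmetry with hypothetical numbers (again, just a sketch using the same bucketing rules as above):

```python
def nps(scores):
    # Same bucketing as above: 0-6 detractor, 7-8 passive, 9-10 promoter.
    return 100 * (sum(s >= 9 for s in scores) - sum(s <= 6 for s in scores)) / len(scores)

before = [5] * 10    # ten customers, all answering 5
after = [6] * 10     # every one of them improves to a 6
print(nps(before), nps(after))   # -100.0 -100.0 -- no movement at all

nudged = [7] * 10    # nudge those 6s up one more point...
print(nps(nudged))   # 0.0 -- a 100-point swing
```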

The NPS methodology converts those 11 points of raw data into three categories, and then does the math. Basing customer satisfaction and growth on a single question is flimsy enough, but actively collecting more data and then discarding it is ludicrous. The mathematical average of the raw scores is more meaningful.
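
As a simple illustration with hypothetical responses (same sketch of the formula as before), the plain average registers an improvement that NPS discards entirely:

```python
from statistics import mean

def nps(scores):
    # 0-6 detractor, 7-8 passive, 9-10 promoter; NPS = %promoters - %detractors.
    return 100 * (sum(s >= 9 for s in scores) - sum(s <= 6 for s in scores)) / len(scores)

before = [3, 4, 5, 5, 6, 6, 7, 8, 9, 10]   # hypothetical raw responses
after = [6, 6, 6, 6, 6, 6, 7, 8, 9, 10]    # the unhappiest customers all climb to 6

print(nps(before), nps(after))     # -40.0 -40.0 -- NPS sees nothing
print(mean(before), mean(after))   # 6.3 7.0 -- the average captures the improvement
```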

In Defense of NPS

There's really nothing wrong with NPS beyond its inflated value. Any measurement has benefits, and this one measures advocacy. What's concerning is the general lack of awareness around how NPS works, and the overreliance on its significance.

A good NPS often correlates to strong customer loyalty, but cause and effect is less clear. NPS is a data point that becomes much more valuable when combined with additional data. Ideally, a few simple follow-up questions, such as why, can make it much more meaningful.

Also, it's important to understand that NPS is about relationships, not transactions. Specifically, it's about relationships with brands, and not with individuals (or agents). NPS should not be used to rate employee performance.

The true power of NPS is in its simplicity. A low or decreasing NPS should be considered a light on the dashboard, an invitation to investigate what's going on -- nothing more.

Dave Michels is a contributing editor and analyst at TalkingPointz.

Follow Dave Michels on Twitter!
@DaveMichels