NPS Is Not an Effective Way to Measure a Training Program


Kelby Zorgdrager, CEO and Founder of DevelopIntelligence, takes a look at NPS scores and their place in the training space.

First things first. This is not a discussion on how to calculate the ROI for a training program. Nor is it a discussion on how to measure training effectiveness or its impact. It’s about evaluating training program quality.

You’re probably not going to agree with my perspective, and that’s okay. I’m sharing my point of view to hopefully start a dialogue in our community and help elevate the way we all evaluate training programs.

I’ve been in corporate training for nearly 20 years, and many things have changed dramatically for the better. Some, however, have changed for the worse. One thing that hasn’t changed, but needs to, is how we measure training program quality.

The Problem with Smiley Faces

Early on, this type of evaluation was all about smile sheets. Literally, evaluation forms had happy, frowning, and flat faces. After a class would end, students would circle the face that best represented how they felt about it. Enough smiley faces, and the instructor and training buyer would say ‘Wow. That was a great class’. Too many flat or frowning faces and the training buyer would say ‘Man. What happened in there?’

Of course, now we know smiley faces are subjective and not an accurate representation of a course’s success or impact. What if a student had a disruptive peer next to them? What if they were experiencing life – badly – outside of the classroom? Or, and this is a doozy, what if the course didn’t align with their backgrounds and needs?

Smiley faces were cute, but they were simply too hard to measure. Was it a big smile or a small one? Did a flat face mean ‘I’m indifferent’? Or did it mean ‘I’m passive-aggressive and upset but don’t want to let you know’? Or was it the age-old ‘I’m an engineer. I don’t ever rate anything above average.’ You get the idea; it’s a flawed measurement tactic. But there are other options.

After smiley sheets lost their street cred, as an industry we started using the Likert scale (1–5) to measure course success. We came up with great questions like, ‘On a scale of 1–5, how would you rate the instructor?’ Or, ‘On a scale of 1–5, how would you rate the course?’ Over time, we realized those questions were also too subjective, so we added a magical phrase: ‘the effectiveness.’

We went from the subjective ‘how would you rate the instructor?’ to the ostensibly objective ‘how would you rate the effectiveness of the instructor?’ The really great thing about the Likert scale? It’s based on numbers. Numbers create data, statistics, and OMG, metrics. With the Likert scale and two simple words, we could quantitatively measure course effectiveness. It was like, boom. Mind blown.

Of course, instructors, presenters, and facilitators stopped patting themselves on the back when they realized that humor, swag, and free candy go a long way toward positively influencing the values on that Likert scale. But then, suddenly, out of nowhere, like social media needing a new cat video to fill its viral void, came the Net Promoter Score, or NPS. People rejoiced:

‘It’s amazing!’

‘It’s founded on research and science – this is the way to measure training.’

‘It’s even in SurveyMonkey!’

In the world of corporate training, NPS was familiar, as it was based on a question we’ve all become intimate with in the retail world: ‘How likely are you to recommend this?’

It’s a great question. I totally get the reasoning behind it. If a training course scores low too frequently, and attendees aren’t willing to recommend it to their colleagues, maybe it shouldn’t be on the training calendar. Just like a bad appetizer on a menu: if it’s not ordered enough, the restaurant removes it.

My concern is this: when we evaluate a course solely on one question, a single marker or data point, even one taken from 16 people, we don’t have enough data to determine course quality. Why? There is no context.

We need context to truly understand what’s going on. What impacted an attendee’s decision to rate the course poorly? Was it the instructor? Maybe there was a personality conflict? Was it their background? Did they have the skills needed to ensure the course made an impact? Were they attentive and present? Maybe they had production issues?

You get the idea. These other questions, when structured properly, are important and necessary. They also need to be evaluated against the NPS. Sometimes they’ll support and illustrate the story on why a course has a low NPS. Other times, they’ll show how NPS alone paints the wrong story.

When we only use NPS, or when we weigh it higher than other evaluation questions, NPS becomes the new and slightly improved smiley face.

Does NPS Work as a Training Measure?

NPS was created in 2003 to measure customer loyalty. It works there. But does it work in corporate training?

Here’s the question learners answer: “How likely is it that you would recommend our company/product/service to a friend or colleague?” Respondents answer on a zero-to-10 scale, where a 10 means “extremely likely” to recommend, five means neutral, and zero means “not at all likely,” according to Frederick Reichheld in an old but still valid Harvard Business Review article. The resulting score falls between -100 and +100.

Reichheld said customers with the highest rates of repurchase and referral would give a nine or 10 as a rating. Those who were “passively satisfied” rated a seven or eight, and “detractors” would give a score from zero to six. The NPS is the percentage of promoters minus the percentage of detractors; a score of +50 is considered excellent.
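The arithmetic is simple enough to sketch. Here is a minimal example of the promoter/passive/detractor calculation described above; the ratings list is invented for illustration:

```python
def nps(ratings):
    """Compute a Net Promoter Score from 0-10 ratings.

    Promoters rate 9-10, passives 7-8, detractors 0-6.
    Returns the percentage of promoters minus the percentage
    of detractors, so the result falls between -100 and +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical post-class survey of 16 attendees (invented data):
ratings = [10, 9, 9, 8, 8, 7, 7, 7, 6, 6, 5, 9, 10, 8, 4, 3]
print(nps(ratings))  # 5 promoters, 5 detractors -> 0.0
```

Note that passives drop out of the score entirely, which is part of why a room full of reasonably satisfied people can still produce an NPS of zero.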

But again, how good is it at evaluating training? Why would an organization even want to measure NPS? Well, for one thing, people tend to forget 70 percent of what they learn within 24 hours. NPS can be used before and after training to see how it affects business results and how well employees enjoyed the training process.

In the Kirkpatrick Model of Evaluation, an organization can use NPS to measure:

  • Reaction: How did employees feel about the training?
  • Learning: How much do the employees actually remember? Hint: assess before, during and after training, particularly well after the training takes place, for best results.
  • Behavior: Does the training actually affect how employees do their jobs?
  • Results: An organization can use the NPS to determine whether business goals have been met by gathering data on the metrics they want to improve before and after training takes place.

According to APassEducation, “Net Promoter Score is well worth considering as a means to evaluate a training program. Across the board, reliable and insightful metrics are indispensable when it comes to designing effective corporate training.”

Brian Washburn, who writes the TrainLikeaChampion blog, wrote that he used to use a five-point Likert scale for his training presentations, asking whether students learned new ideas and concepts. He generally drew a 4 or higher, where 5 is the highest possible score. When his office switched to NPS, he noticed a big difference in the answers to whether learners would recommend the training: he received much lower scores.
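That gap is easy to reproduce on paper. The numbers below are invented for illustration, not Washburn’s data, but they show how the same cohort can average a healthy 4+ on a Likert question while the “would you recommend” question yields an NPS of zero, because passively satisfied sevens and eights count for nothing:

```python
def likert_mean(scores):
    """Average of 1-5 Likert responses."""
    return sum(scores) / len(scores)

def nps(ratings):
    """Percentage of promoters (9-10) minus detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical cohort of 10 learners (invented numbers):
# their Likert ratings of the course look great...
likert = [4, 4, 5, 4, 4, 5, 4, 4, 4, 5]
# ...but most are only "passively satisfied" (7-8) when asked
# whether they would recommend it, and two are detractors.
recommend = [8, 7, 9, 8, 7, 10, 8, 7, 6, 6]

print(likert_mean(likert))  # 4.3 -- "a 4 or higher"
print(nps(recommend))       # 2 promoters, 2 detractors -> 0.0
```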

“You can’t do something better if you forgot what you learned before you go to bed that evening,” Washburn wrote. “The presenter plays a big role in whether or not people remember the content and are excited to use it when they return to their offices.”

So, for him, using NPS was a way to significantly improve his training delivery: applying adult learning principles, making sure he had his instructions down, and sticking more closely to the lesson plan. This improved his teaching, the course itself, and how well people performed the skills he taught, which in turn helped show how well the training helped the company achieve its overall goals.

The Good, the Bad, the Measurement

Of course, there are some other benefits to using NPS as part of training evaluation, especially in the e-learning industry. Administering this survey is relatively easy thanks to its brevity, and it can supplement regular evaluation procedures.

Will Thalheimer has a slightly different point of view. He said: “[NPS] is one of the stupidest ideas yet for smile sheets.” Worse? These evaluations just don’t provide much information.

Thalheimer has quite a few reasons for not using the NPS as a training evaluation tool:

  • The NPS is designed for customer satisfaction ratings, not for training evaluations. It doesn’t make sense to take something designed for one field and use it in another without having some idea whether or not it would be a useful construct. For instance, one student’s training recommendation doesn’t mean an organization can assume that learner’s recommendation says everything important about a program’s effectiveness.
  • If an organization wants to use NPS for training, it has to believe that learners know if training is effective, and they have to know if those they plan to recommend it to are likely to have the same ideas about training effectiveness as they do.

“The second belief is not worth much, but it is probably what really happens,” Thalheimer said. “It is the first belief that is critical, so we should examine that belief in more depth. Are learners likely to be good judges of training effectiveness?”

  • Research shows that students don’t really evaluate their own learning well. For example, they don’t know how much they know and how much they can remember. Plus, traditional smile sheets have little to no correlation between what learners say about their learning experience and what they actually learned.
  • When learners assess their learning right after training, they tend to perform better than they would if they used the same skills on the job down the road.

So, there are other measurements that can provide solid information about training effectiveness. Further, using more than one dimension to measure effectiveness is vital, as it ensures a contextually accurate learning picture.

Thalheimer might add that one cannot forget the importance of training champions when evaluating or promoting program success. He cited a significant body of research that “found that one misdirected comment by a team leader can wipe out the full effects of a training program.” If influential people wouldn’t recommend your presentation, research shows that you have a problem.

In the end, using the NPS is one way to determine whether an organization’s learners would recommend the training to others. But I agree with the experts. It shouldn’t be the only metric used to evaluate training.