Kelby Zorgdrager, CEO and Founder of DevelopIntelligence, takes a look at NPS scores and their place in the training space.
First things first: this is not a discussion of how to calculate the ROI of a training program. Nor is it a discussion of how to measure training effectiveness or its impact. It’s about evaluating training program quality.
You’re probably not going to agree with my perspective, and that’s okay. I’m sharing my point of view in the hope of starting a dialogue in our community to help elevate the way we all evaluate training programs.
I’ve been in corporate training for nearly 20 years, and many things have changed dramatically for the better. Some, however, have changed for the worse. One thing that hasn’t changed, but needs to, is how we measure training program quality.
Early on, this type of evaluation was all about smile sheets. Literally, evaluation forms had happy, frowning, and flat faces. After a class would end, students would circle the face that best represented how they felt about it. Enough smiley faces, and the instructor and training buyer would say ‘Wow. That was a great class’. Too many flat or frowning faces and the training buyer would say ‘Man. What happened in there?’
Of course, now we know smiley faces are subjective and not an accurate representation of a course’s success or impact. What if a student had a disruptive peer next to them? What if they were experiencing life, badly, outside of the classroom? Or, and this is a doozy, what if the course didn’t align with their background and needs?
Smiley faces were cute, but they were simply too hard to measure. Was it a big smile or a small one? Did a flat face mean ‘I’m indifferent’? Or did it mean ‘I’m passive-aggressive and upset but don’t want to let you know’? Or was it the age-old ‘I’m an engineer. I don’t ever rate anything above average’? You get the idea; it’s a flawed measurement tactic. But there are other options.
After smiley sheets lost their street cred, we as an industry started using the Likert scale (1–5) to measure course success. We came up with great questions like, ‘On a scale of 1–5, how would you rate the instructor?’ Or, ‘On a scale of 1–5, how would you rate the course?’ Over time, we realized those questions were also too subjective, so we added a magical phrase: ‘the effectiveness.’
We went from the subjective ‘How would you rate the instructor?’ to the objective ‘How would you rate the effectiveness of the instructor?’ The really great thing about the Likert scale? It’s based on numbers. Numbers create data, statistics, and, OMG, metrics. With the Likert scale and two simple words, we could quantitatively measure course effectiveness. It was like, boom. Mind blown.
Of course, instructors, presenters, and facilitators stopped patting themselves on the back when they realized that humor, swag, and free candy go a long way toward positively influencing the values on that Likert scale. But then, suddenly, out of nowhere, like social media needing a new cat video to fill its viral void, came the Net Promoter Score, or NPS. People rejoiced:
‘It’s amazing!’
‘It’s founded on research and science – this is the way to measure training.’
‘It’s even in SurveyMonkey!’
In the world of corporate training, NPS was familiar, as it was based on a question we’ve all become intimate with in the retail world: ‘How likely are you to recommend this?’
It’s a great question, and I totally get the reasoning behind it. If a training course scores low too frequently, and attendees aren’t willing to recommend it to their colleagues, maybe it shouldn’t be on the training calendar. Just like a bad appetizer on a menu: if it isn’t ordered enough, the restaurant removes it.
My concern is that evaluating a course solely on one question, a single marker or data point, even when taken from 16 people, does not give us enough data to determine course quality. Why? There is no context.
We need context to truly understand what’s going on. What impacted the attendees’ decision to rate poorly? Was it the instructor? Maybe there was a personality conflict? Was it their background? Did they have the skills needed to ensure the course made an impact? Were they attentive and present? Maybe they had production issues?
You get the idea. These other questions, when structured properly, are important and necessary. They also need to be evaluated against the NPS. Sometimes they’ll support and illustrate the story on why a course has a low NPS. Other times, they’ll show how NPS alone paints the wrong story.
When we only use NPS, or when we weigh it higher than other evaluation questions, NPS becomes the new and slightly improved smiley face.
NPS was created in 2003 to measure customer satisfaction. It works there. But does it work in corporate training?
Here’s the question learners answer: “How likely is it that you would recommend our company/product/service to a friend or colleague?” Respondents answer on a scale of zero to 10, where 10 means “extremely likely” to recommend, five means neutral, and zero means “not at all likely,” according to Frederick Reichheld in an old but still valid Harvard Business Review post. The resulting score ranges from -100 to +100.
Reichheld said customers with the highest rates of repurchase and referral would give a nine or 10 as a rating. Those who were “passively satisfied” rated a seven or eight, and “detractors” would give a score from zero to six. The NPS is the “percentage of promoters minus the percentage of detractors. +50 is considered an excellent score,” according to APassEducation.com.
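To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The ratings are made-up sample data for a hypothetical class, not figures from any real survey:

```python
def nps(ratings):
    """Net Promoter Score from a list of 0-10 survey ratings.

    Promoters rate 9-10, passives 7-8, detractors 0-6.
    NPS = % promoters - % detractors, so it ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical class of 10 attendees: 4 promoters, 4 passives, 2 detractors
ratings = [10, 10, 9, 9, 8, 8, 7, 7, 6, 5]
print(nps(ratings))  # 20.0, well short of the +50 "excellent" bar
```

Note how coarse the measure is at classroom scale: with 16 respondents, a single attendee switching from passive to detractor moves the score by 6.25 points.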
But again, how good is it at evaluating training? Why would an organization even want to measure NPS? Well, for one thing, people tend to forget 70 percent of what they learn within 24 hours. NPS can be used before and after training to see how it affects business results and to see how well employees enjoyed the training process.
In the Kirkpatrick Model of Evaluation, an organization can use NPS to measure learner reaction, the model’s first level.
According to APassEducation, “Net Promoter Score is well worth considering as a means to evaluate a training program. Across the board, reliable and insightful metrics are indispensable when it comes to designing effective corporate training.”
Brian Washburn, who writes the TrainLikeaChampion blog, wrote that he used to use a five-point Likert scale for his training presentations, asking whether students learned new ideas and concepts. His sessions generally drew a 4 or higher, where 5 is the highest score possible. When his office switched to NPS, he noticed a big difference in the answers to whether learners would recommend the training: he received much lower scores.
“You can’t do something better if you forgot what you learned before you go to bed that evening,” Washburn wrote. “The presenter plays a big role in whether or not people remember the content and are excited to use it when they return to their offices.”
So, for him, using NPS was a way to significantly improve his training delivery: to apply adult learning principles, to make sure he had his instructions down, and to stick more closely to the lesson plan. This improved his teaching and his courses, improved how well people performed the skills he taught, and that in turn helped determine how well the training helped the company achieve its overall goals.
Of course, there are some other benefits to using NPS as part of training evaluation, especially in the e-learning industry. Administering this survey is relatively easy thanks to its brevity, and it can supplement regular evaluation procedures.
Will Thalheimer of WillatWorkLearning.com has a slightly different point of view. He said: “[NPS] is one of the stupidest ideas yet for smile sheets.” Worse? These evaluations just don’t provide much information.
Thalheimer has quite a few reasons for not using the NPS as a training evaluation tool. Chief among them: the recommendation question rests on the belief that learners are good judges of training effectiveness.
“The second belief is not worth much, but it is probably what really happens,” Thalheimer said. “It is the first belief that is critical, so we should examine that belief in more depth. Are learners likely to be good judges of training effectiveness?”
So, there are other measurements that can provide solid information about training effectiveness. Further, using more than one dimension to measure effectiveness is vital, as it ensures a contextually accurate learning picture.
Thalheimer might add that one cannot forget the importance of training champions when evaluating or promoting program success. He cited a significant body of research that “found that one misdirected comment by a team leader can wipe out the full effects of a training program.” If influential people wouldn’t recommend your presentation, research shows that you have a problem.
In the end, using the NPS is one way to determine whether an organization’s learners would recommend the training to others. But I agree with the experts. It shouldn’t be the only metric used to evaluate training.