Everyone Knows NPS Doesn’t Measure Learning.
People Use It Anyway.
If you’ve ever evaluated an online learning program, there’s a good chance you’ve used Net Promoter Score (NPS). You’ve probably also wondered whether it’s actually telling you anything.
You’re right to wonder. NPS isn’t just an imperfect tool for measuring learning. It’s measuring the wrong thing entirely.*
What NPS Was Built For
Net Promoter Score was designed for consumer products and services. Hotels. Airlines. Software subscriptions. The question it’s built around (“How likely are you to recommend us?”) makes sense in those contexts: reduce friction, increase satisfaction, get people to come back and bring friends.
That logic works when a frictionless experience is the point.
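(For reference, the score itself is simple arithmetic: respondents rate 0–10, those scoring 9–10 are promoters, 0–6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch:)

```python
def nps(scores):
    """Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; NPS is the percentage
    of promoters minus the percentage of detractors (-100 to 100).
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Ten responses: four promoters, three passives (7-8), three detractors
print(nps([10, 9, 9, 10, 8, 7, 8, 6, 5, 3]))  # prints 10
```

Note what the formula ignores: a passive 7 and a detractor 0 both say nothing about whether anyone learned anything.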
The Problem: Learning Requires Friction
In consumer products, friction is a bug. In learning, friction is often the whole point. (h/t Matt Tower, who first introduced me to this framing.) Think back to the class you liked most growing up. Was it the easiest? Or was it the one where the teacher really pushed you into new ways of thinking?
The research is consistent. The techniques that produce the most durable learning (like retrieval practice, spaced repetition, challenging assessments) feel harder, not easier. But they work better.
This creates a direct conflict with what NPS rewards. A learner who struggled through a rigorous program and came out with real skills will rate it differently than one who coasted through something well-produced and retained almost nothing. NPS often favors the second one.
So why do people use it? Because it’s an industry standard, leadership already knows what it is, and it’s a concrete number. Nobody gets fired for using NPS. But when it’s your primary signal, it creates a quiet incentive to make programs feel good rather than work well.
What to Measure Instead
Admittedly, better metrics are harder to collect. That’s exactly why NPS fills the vacuum. But if you want to know whether a program is working, the questions worth asking are:
Behavior change. Are learners doing something differently?
Application. Did the learning transfer to a real situation?
Knowledge retention. Do they still know it 30 days later?
Completion rates aren’t sufficient either (I’ll have more to say about that in the future), but they’re more honest than satisfaction scores. Finishing is closer to the actual goal than recommending.
The Bottom Line
Impact is your goal. Measurement, properly done, shows you that impact. But it’s a hard problem. Instead of tacking an NPS survey onto the end and celebrating that you did better than your cable provider, think from the outset about what success looks like and how you might measure it.
* I made it all the way to post 2 before dipping into “X: You’re Doing It Wrong.” I’m sorry and you’re welcome.
![Thinking [Good]](https://substackcdn.com/image/fetch/$s_!6wiX!,w_40,h_40,c_fill,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbec9953e-30d8-4c4f-93da-429bc33233ec_1421x1421.jpeg)
