AI-generated (Stable Diffusion) image of “cyclon writing with a pen”.

The sporadic blog of David J A Cooper. I write sci-fi, teach software engineering, and occasionally say related (or not related) things.

Check out Nova Sapiens: The Believers.

Measuring the teaching vibe with Net Promoter Scores (or not)

Recall being asked “How likely would you be to recommend…” something, on a scale from 0-10. Your response (combined with others’) goes into creating a Net Promoter Score (NPS) for whatever you were asked about.

NPS has become a commonplace tool for gauging public opinion of a product or service. But is it the right tool to gauge student satisfaction with university teaching practice?

This has been proposed, and I understand why, believe it to be entirely in good faith, and concede that there are studies that appear to support it at some level.

But I can’t use NPS myself, because a deep analysis of it leads me to confusion and despair. I feel one cannot provide an honest answer to an NPS question, because the different layers of meaning are contradictory. You can answer truthfully to the literal meaning of the question, or you can answer in a way that results in your opinion being accurately reflected in the final analysis. You need different answers to do this!

A key detail: NPS categorises respondents as “promoters” (those who answer 9 or 10), “passives” (7 or 8) and “detractors” (6 or below). An actual net promoter score is the percentage of promoters, minus the percentage of detractors (ignoring the passives).
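As a sketch, the calculation is simple enough to write out (the list of responses below is hypothetical, purely for illustration):

```python
def nps(responses):
    """Compute a Net Promoter Score from a list of 0-10 responses.

    Promoters answer 9-10, passives 7-8, detractors 0-6. The score is
    the percentage of promoters minus the percentage of detractors,
    so it ranges from -100 to +100.
    """
    n = len(responses)
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / n

# Hypothetical group of ten respondents: 3 promoters, 3 passives, 4 detractors.
print(nps([10, 9, 9, 8, 7, 7, 6, 5, 3, 0]))  # -10.0
```

Note how the three passives contribute nothing to the score, and a 6 counts exactly the same as a 0.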

Thus, we don’t ask respondents for what we actually want to know (presumably because they wouldn’t directly admit to being “promoters”, “passives” or “detractors”), and the mapping seems wildly at odds with the literal wording of the question.

Given the question “How likely would you be to recommend TheThing (on a scale from 0-10)?”, an honest answer of 5/10 seems to imply a 50% probability of at least one recommendation of TheThing to some other person. Thus, if N independent people answered 5/10, then the expected number of people making recommendations would be N/2, which is most certainly a positive number, and seemingly a decent outcome. Even if 10 honest people all answered 1/10, logically, on average, one of them will make a recommendation, which is still good on balance.
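To make that arithmetic concrete — taking each answer literally as a probability of recommending, which is my reading of the question and not part of how NPS is actually scored — the expected number of recommenders is just the sum of those probabilities (by linearity of expectation):

```python
def expected_recommenders(responses):
    """Treat each 0-10 answer as a probability (in tenths) that the
    respondent makes at least one recommendation, and sum those
    probabilities to get the expected number of recommenders."""
    return sum(responses) / 10

print(expected_recommenders([5] * 10))  # ten people answering 5/10 -> 5.0
print(expected_recommenders([1] * 10))  # ten people answering 1/10 -> 1.0
```

Under this literal reading, even a wall of 1/10 answers predicts a positive number of recommendations — which is exactly the disconnect from the “detractor” framing.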

So where do the “detractors” come in? We haven’t actually asked anything about what negative actions a respondent may take.

As a respondent, I might find TheThing quite praiseworthy, while at the same time estimating that there’s only a 2/10 chance I’ll get around to recommending it to someone. I’m just not the sort of person who goes around recommending things. I have a lot on my mind, and I may simply not remember TheThing later on. And during a conversation, I don’t want to pester people by talking about things I don’t think they’d be interested in. Nonetheless, my 2/10 chance logically means TheThing is worthy of recommendation, in my view. I don’t feel I’ve declared myself to be a detractor, which would require the chance of recommendation to be 0/10.

I realise NPS is calibrated along other lines. It’s common knowledge that, in a “rating out of 10”, low numbers mean condemnation. People who “rate” something 2/10 or even 5/10 tend to dislike it and perhaps recommend against it. In one sense, adopting this interpretation for NPS is pragmatic, but in another sense it feels dishonest. The actual NPS question doesn’t ask respondents to simply “rate” something out of 10. It specifically asks for a “likelihood” of “recommendation”, which makes it a prediction (albeit a loose one), and a prediction only of positive outcomes. It could be influenced by factors beyond merit, and there is no “detracting” answer, technically. (A detractor would give a recommendation probability of 0/10, but so would someone who simply intends to stay out of the discussion.)

I also find it notable that NPS asks for and then throws away a significant part of the information provided. It asks you to choose between 11 possible responses, when it only needs and uses 3 categories. This just seems rude.

I could be accused of pedantry, perhaps. However, I am ultimately trying to make a point about the use of NPS in universities. Universities are full of people essentially trained (or being trained) in pedantry, for whom the precise meaning of both words and probability values is rather important. Moreover, 5/10 in a university setting is widely recognised as a “pass” mark, and 6/10 a “credit” (answers that would mark you out as an NPS “detractor”), which introduces yet another layer of confounding interpretation.

Thus, I doubt the meaningfulness of NPS in universities. Since I find the conflicting meanings personally irritating (and I avoid answering NPS questions myself whenever possible), I can’t in good conscience inflict NPS on others.