

Fast Guide - Setting targets for satisfaction

By TLF Research

One of the frustrating things about reporting survey results within your organisation can be a hostile reception from colleagues. Critics motivated by defensiveness, bloody-mindedness (though obviously that one doesn't apply to your organisation) or legitimate concerns often trap the process in pedantic debate, rather than allowing it to progress to what really matters - improving your performance. We've gathered together a selection of the more common difficult questions, complete with appropriate responses - feel free to send us any more of your favourites.

But we've only spoken to a small proportion of customers - is that reliable?

Yes. Contrary to popular belief, the main factor influencing the reliability of a survey is the absolute size of the sample, not the proportion of the customer base it covers. A sample of 500 is therefore equally reliable for every business, however many customers they have. The only reason to interview more people than this is when the results need to be split down in the analysis, which is why, in practice, large organisations often require large samples. When you consider that most published newspaper polls use a sample of around 1,000 to represent the views of the adult population of the UK, it should put your own sample size in perspective.
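
To see why the size of the customer base barely matters, consider the standard margin-of-error calculation. The sketch below (the sample size, population sizes and 50% proportion are illustrative assumptions, not figures from any particular survey) applies the finite population correction and shows it has almost no effect once the customer base dwarfs the sample:

```python
import math

def margin_of_error(n, population, p=0.5, z=1.96):
    """95% margin of error for a proportion, including the
    finite population correction (FPC)."""
    se = math.sqrt(p * (1 - p) / n)                      # infinite-population standard error
    fpc = math.sqrt((population - n) / (population - 1))
    return z * se * fpc

for customers in (10_000, 100_000, 10_000_000):
    moe = margin_of_error(500, customers)
    print(f"{customers:>12,} customers, sample of 500: +/-{moe:.1%}")

# Roughly +/-4.3% to +/-4.4% in every case - once the customer base
# dwarfs the sample, its size has almost no effect on reliability.
```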

How do we know Mr X means the same thing by "8 out of 10" as Ms Y?

We don't, and note that the same criticism applies to verbal scale points such as "very satisfied" as well. This is one of the reasons why it would be unwise to draw too many conclusions about individuals based on their survey scores. However, by aggregating scores across a large sample of respondents we can be confident that these variations in scoring are absorbed as random error, which has a negligible effect on the average across respondents.
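
As a toy illustration of this averaging effect (the figures are simulated, not real survey data), imagine each respondent applies their own personal offset to a common underlying satisfaction level:

```python
import random

random.seed(42)

def survey_mean(n, true_satisfaction=7.5):
    scores = []
    for _ in range(n):
        style = random.gauss(0, 1)                 # Mr X vs Ms Y scoring habits
        score = min(10, max(0, true_satisfaction + style))
        scores.append(score)
    return sum(scores) / len(scores)

for n in (10, 100, 1000):
    print(f"n = {n:>4}: mean = {survey_mean(n):.2f}")

# With a small sample the mean wanders; with a large one it settles
# close to the underlying 7.5, because the scoring styles cancel out.
```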

The only time this concern has real legitimacy is when comparing scores across countries. There is evidence that there are systematic cultural differences in the way people in different countries use scales. This must be controlled for before comparing between countries.
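
There are several ways to make such an adjustment. As one hedged illustration (the approach and the figures below are assumptions for the sketch, not a method described here), scores can be standardised within each country so that comparisons are made against each country's own scoring norms:

```python
from statistics import mean, stdev

raw = {
    "UK":    [7, 8, 8, 9, 7, 8],
    "Japan": [6, 7, 6, 7, 6, 7],   # systematically more reserved scoring
}

for country, scores in raw.items():
    m, s = mean(scores), stdev(scores)
    z_scores = [(x - m) / s for x in scores]
    print(country, [f"{z:+.2f}" for z in z_scores])

# Standardising compares each score against its own country's norms,
# so cultural scale use no longer masquerades as a satisfaction gap.
```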

But some people will never give a 10, so that must distort the scores

This is true, although it is rarer than many people suppose. In many respects the answer is the same as for the question above - averaged across a large sample, the odd (odd being the operative word) individual like this will be cancelled out by other respondents.

How do we know the results are reliable?

Based on well-established statistical principles we can be confident that, with a random sample and a high response rate, survey estimates are accurate within known limits. With a sample size of 400 customers these limits will be comparatively small: plus or minus 0.1 on a typical 10-point scale score. In other words, with a mean of 7.4 we would be 95% sure the real score was in the range 7.3-7.5.
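
For the curious, the arithmetic behind that range is the standard formula for a 95% confidence interval: mean plus or minus 1.96 standard errors. The sketch below assumes a standard deviation of 1.0, chosen because it reproduces the plus-or-minus 0.1 figure above:

```python
import math

mean, sd, n = 7.4, 1.0, 400
half_width = 1.96 * sd / math.sqrt(n)              # 95% CI half-width
print(f"95% CI: {mean - half_width:.1f} to {mean + half_width:.1f}")
# -> 95% CI: 7.3 to 7.5
```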

You've worked out our satisfaction index one way - I think this other way is better

The way the index is calculated is far less important than using it to look at differences between branches/business units/regions and to track improvements over time. If your index is sensitive enough to show these differences effectively then it's doing its job, however it may be put together. Composite indices are, in general, preferable to single-question measures because averaging several questions smooths out the random measurement error that any one question carries.
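
As a hedged illustration of what a composite index can look like (the questions, scores and weights below are invented for the sketch, not our methodology), a weighted average of several question scores is one common construction:

```python
# Question scores out of 10, and importance weights that sum to 1.0.
scores  = {"staff": 8.2, "quality": 7.9, "delivery": 7.1, "price": 6.8}
weights = {"staff": 0.30, "quality": 0.30, "delivery": 0.25, "price": 0.15}

index = sum(scores[q] * weights[q] for q in scores)
print(f"Satisfaction index: {index:.1f} / 10")     # -> 7.6 / 10
```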

Is this response rate high enough for us to trust the results?

This is the toughest, and most legitimate, question of the lot. In many cases response rates are not as high as we would like them to be, and spending the kind of money a high response rate might demand - switching from a self-completion questionnaire to telephone or even face-to-face interviews, for example - is not always a viable option.

As long as your response rate is respectable, by which I mean over 20%, most organisations would be happy to act on the results. Theoretically there is room for non-response bias here, but it is unlikely to change the action you should take. In other words, the headline score may be distorted, but the priorities for improvement you choose are likely to reflect the priorities of those customers who have not responded as well as those who have.
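
To illustrate that point (with invented numbers, not survey findings), suppose non-responders would have scored every area somewhat lower than responders, but broadly agree on which areas are weakest:

```python
responders     = {"staff": 8.0, "quality": 7.6, "delivery": 6.9}
non_responders = {k: v - 0.5 for k, v in responders.items()}

def priorities(scores):
    return sorted(scores, key=scores.get)          # weakest area first

print("Responders' priorities:    ", priorities(responders))
print("Non-responders' priorities:", priorities(non_responders))
# Both rank delivery, then quality, then staff - the headline mean
# differs, but the action you would take does not.
```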
