
Accuracy matters more than precision

16th April 2019

How many decimal places does your organisation use in its reports? I bet it's too many.

When organisations measure, they like to pretend that they are adopting the methods and the rigour of the hard sciences, a phenomenon known as "physics envy".

Spurious precision is one example of this. After all, a Net Promoter Score (NPS) of 21.3 sounds much more important and scientific than one of 21, doesn't it? The truth is that, taking into account the margin of error of most NPS surveys, even describing the score as "about 20" is probably overstating our real confidence in the result.
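
To put a number on that, here's a rough sketch in Python. The sample size and the mix of promoters and detractors are invented for illustration; the calculation is just the standard error of the +1/0/-1 score that NPS is built from:

```python
import math

# Hypothetical survey: 400 responses, 45.0% promoters, 23.7% detractors
n = 400
p_promoter = 0.450
p_detractor = 0.237

nps = (p_promoter - p_detractor) * 100            # 21.3

# Score each response +1 (promoter), 0 (passive), -1 (detractor).
# The variance of that score is E[X^2] - (E[X])^2:
variance = (p_promoter + p_detractor) - (p_promoter - p_detractor) ** 2
margin_95 = 1.96 * math.sqrt(variance / n) * 100  # 95% margin of error

print(f"NPS {nps:.1f}, 95% margin of error +/- {margin_95:.1f}")
# NPS 21.3, 95% margin of error +/- 7.9
```

With 400 responses the honest report is "somewhere between about 13 and 29", which makes that first decimal place look rather silly.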

We can add as many decimal places as we like, but it does nothing to improve the fundamental quality of the numbers we report. If you're really interested in being rigorous, you need to think more carefully about three aspects of measurement which affect the quality of any data.

Sampling error
All survey data, and many other sources of data within organisations, are subject to random sampling error. The amazing thing about statistics is that we can estimate how big that error is likely to be, based on sample size and the variability of the data we've gathered. Confidence intervals show you the margin of error around the numbers you quote, and help to keep you honest if you're tempted to report too many decimal places.

Sampling error comes from the fact that we haven't spoken to everyone, and that a different sample would have given a (slightly) different result.
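
You can see sampling error in action with a small simulation. The population below is invented, but its true NPS is exactly 21.3; each pass draws a fresh sample of 400 and reports the score that sample would have given us:

```python
import random

# Invented population of 1,000: 45% promoters (+1), 31.3% passives (0),
# 23.7% detractors (-1), so the true NPS is 21.3
population = [1] * 450 + [0] * 313 + [-1] * 237

random.seed(1)
for survey in range(1, 6):
    sample = random.choices(population, k=400)   # sample with replacement
    nps = sum(sample) / len(sample) * 100
    print(f"Survey {survey}: NPS {nps:.1f}")
```

Same population, five different answers. The confidence interval is what tells you how far those answers are likely to wander.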

But that's not our only source of error. We also need to think about the issues of precision and accuracy, as illustrated in the graphic below.

[Image: illustration of the difference between precision and accuracy (bias)]

Precision (variable measurement error)
Each individual's score comes with its own random measurement error. Perhaps they slightly misunderstand the question, perhaps they're in a bad mood that day, or perhaps you typed the wrong score in. Because they're random, in a large enough sample these errors should cancel each other out, but they do tend to make results more volatile.

Random measurement error is inevitable, but if we can improve reliability our metrics will be less volatile and more precise.
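
A quick way to convince yourself of this is to simulate it. In the sketch below (all the numbers are invented), the same 400 "respondents" are surveyed twelve times, first with no per-answer noise and then with a generous dose of it. The long-run average barely moves, but the month-to-month results get wobblier:

```python
import random
import statistics

random.seed(2)
# Invented "true" 0-10 scores for 400 respondents
true_scores = [random.gauss(7.0, 1.5) for _ in range(400)]

def survey_mean(noise_sd):
    # Each answer drifts by zero-mean noise: misread questions, bad moods, typos
    return statistics.mean(s + random.gauss(0, noise_sd) for s in true_scores)

for noise_sd in (0.0, 2.0):
    monthly = [survey_mean(noise_sd) for _ in range(12)]
    print(f"noise sd {noise_sd}: average {statistics.mean(monthly):.2f}, "
          f"month-to-month spread {statistics.stdev(monthly):.3f}")
```

The two averages agree, because the errors cancel, but the noisy version bounces around from survey to survey. That volatility is exactly what better reliability buys back.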

Accuracy (bias or constant measurement error)
Most serious, and most difficult to deal with, are any problems we may have with accuracy (or validity, in statistical terms). Unlike sampling error and random measurement error, bias can't simply be estimated from the data: there is no easy way to measure and assess all its potential forms.

A short list of some potential dangers:

  • Non-response bias (would the people who don't choose to take part score differently from those who do?)
  • Method effects (do people give more positive scores in an interview than for self-completion methods?)
  • Leading questions (is the question wording likely to lead people towards a positive answer?)
  • Biased scales (do the response options skew towards a positive answer?)

And these are just some of the more obvious ways you can go wrong.
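
Bias is also the one problem that a bigger sample won't fix, which is easy to demonstrate. The sketch below takes the first danger on the list, non-response bias, and assumes (purely for illustration) that detractors are half as likely to reply as everyone else:

```python
import random

random.seed(3)
# Same invented population as before: true NPS is 21.3
population = [1] * 450 + [0] * 313 + [-1] * 237
# Assumed response rates: detractors half as likely to take part
response_rate = {1: 0.30, 0: 0.30, -1: 0.15}

def observed_nps(sample_size):
    responses = []
    while len(responses) < sample_size:
        person = random.choice(population)
        if random.random() < response_rate[person]:
            responses.append(person)
    return sum(responses) / sample_size * 100

for n in (400, 4000, 40000):
    print(f"n = {n:5}: observed NPS {observed_nps(n):.1f} (true NPS 21.3)")
```

The random wobble shrinks as the sample grows, but every sample lands well above the true score of 21.3 (around 38, with these made-up response rates). More data just measures the wrong thing more precisely.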

The takeaway
If all this stats talk seems a bit off-putting, let's put it in simple terms. Accuracy is more important than precision, and much more important than the bogus precision you get from quoting decimal places you haven't really earned.

To quote the excellent '101 Things I Learned in Engineering School':

"Accuracy is the absence of error; precision is the level of detail. Effective problem solving requires always being accurate, but being only as precise as is helpful at a given stage of problem solving. Early in the problem-solving process, accurate but imprecise methods, rather than very exact methods, will facilitate design explorations while minimizing the tracking of needlessly detailed data."

Image credit: s58y [CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons
