Customer Insight is the magazine created by our in-house team of customer research experts. Bringing together features, case studies and the latest thinking on customer experience, it is a must read for anyone interested in using customer insight to improve business performance.


Do Customers Like Being Surveyed?

By Nigel Hill, Founder of The Leadership Factor and editor of Customer Insight

If a company that you do business with writes to you, telephones or emails asking you to provide feedback on how satisfied you are with the service you’re getting from them, how do you react? Do you:

a. Regard it as totally unreasonable behaviour by the supplier?

b. Feel that it shows they care about how you feel as a customer?

c. See it as a pointless exercise that won’t make any difference?

d. Get really annoyed and threaten never to do business with them again?

The Victor Meldrew view

Some people seem to think the answer is (d). Writing in the December issue of Customer Management1, Dr Tim Snaith, Principal Consultant with technology consultancy Foretel, argued that it alienates customers when they’re asked to take part in a customer satisfaction interview while they are “putting the kids to bed”, “recovering from a tough day at work” or “enjoying a drama premiere”. Equally offensive to customers, Snaith contends, is being interviewed in the street whilst they “are purposefully going about their lives”, being invited to take part in a web survey, “herding customers into panel groups” or other ways of surveying them. The bottom line of Snaith’s case is that research is foisted on “an increasingly time poor customer”, who takes part unwillingly, resents every minute of it and consequently buys less from the company, defects or becomes less loyal, however well that organisation’s product or service meets or exceeds the customer’s requirements.

The ‘gut feel’ approach

Before we review the objective evidence about how satisfaction surveys affect customers’ future attitudes and behaviours, let’s analyse the argument from the same subjective approach used in the original article. Like Snaith, I regard myself as a fully enrolled member of the time poor club and I can give the grumpiest of grumpy old men a good run for their money. If the phone rings at home in the evening I groan out loud without knowing who’s on the other end, although in my case it’s more likely to be Champions League football than a drama premiere. It’s probably not a market research company because I get very few requests to take part in customer satisfaction surveys. In the past couple of years I’ve been asked to do a telephone interview for my bank and a postal survey for my local council. As it happens, when I was called at around 6pm by my bank’s research agency, I agreed to take part. (Grumpily of course!) Does this prove that people like to be consulted and agree to take part? Of course not. No more than Snaith’s opinions demonstrate the opposite.

As anyone with even a rudimentary knowledge of research methods understands, it is extremely bad practice to generalise from the particular. Country folk knew that centuries ago when they realised that one swallow doesn’t make a summer. Someone else who should know better is Frederick Reichheld, author of ‘The Ultimate Question’2, in which he relates an anecdote about a personal friend who was asked to take part in a customer satisfaction survey shortly after buying a new Jaguar. Despite the fact that it was a very inconvenient time, “she resentfully complied with the survey”. Does this tell us that all Jaguar owners are too timid to ask a market researcher to call back and conduct the interview at a more convenient time? Unlikely; nor does it prove that most customers resent customer satisfaction surveys.

Subjectively drawing conclusions about general human attitudes and behaviours from personal experience and beliefs is widespread. Stories abound of top level management decisions based on a comment by the Chairman’s wife, or the CEO’s round of golf with one customer. Such stories are not typically used to illustrate good decisions! Harvard Business School calls this ‘management by gut-feel’ and suggests that ‘management by fact’ is a more sensible approach3.

Management by fact

In ‘The Service-Profit Chain’, Heskett, Sasser and Schlesinger describe how switching from ‘management by gut-feel’ to ‘management by fact’ led to a surprising but very profitable discovery for Florida Power & Light (FPL). FPL had implemented various ‘management by fact’ techniques based on quality improvement methods used by Komatsu in Japan3 and on the ‘service mapping’ concept developed by Lynn Shostack and Jane Kingman-Brundage4,5. One of FPL’s quality improvement teams was working on the company’s objective to improve customer satisfaction by reducing the number of power cuts caused by lightning – a fairly common occurrence given the frequency of electrical storms in Florida. The team had made the obvious but gut-feel assumption that the main cause of the problem was lightning striking power facilities during the storms, but predicting and reducing the tendency of lightning to strike power lines wasn’t easy. When challenged about its lack of progress, the team did a factual root cause analysis, which produced the surprising conclusion that direct lightning strikes were only the third most frequent cause. A much bigger problem was damage caused by vehicles crashing into poles during storms. Further analysis showed that the worst offenders were poles positioned on the outside edges of bends, especially those too close to the road. Within a year, moving poles that had been erected close to bends had cut vehicle-related power cuts by 78%, making them only the seventh most common cause of power cuts. As a result of this and other successes, FPL’s Head of Quality Improvement put a sign on his office door saying “In God we trust; all others must bring data.”

The scientific approach

As well as being unscientific, it is unnecessary to base judgements about how surveys affect customers on subjective opinions, because there have been objective academic studies and field tests on the subject. Most notably, Paul Dholakia of Rice University in Houston and Vicki Morwitz of New York University’s Stern School of Business became interested in the many research studies showing that surveys tend to engage people, boost customers’ loyalty and increase their propensity to buy a company’s product or service. They felt, however, that these studies were too restricted, focusing on short term attitude change or one-off behaviour such as a single purchase, and set out to establish whether surveys had a more permanent effect on customers’ attitudes and behaviour. To do so, they undertook a field experiment with over 2,000 customers of an American financial services company. One randomly selected group of 945 customers took part in a 10-minute customer satisfaction survey by telephone; the remaining 1,064 customers were not surveyed and acted as the control group. A year later the subsequent behaviour of all the customers in the sampling frame was reviewed, producing the following results6,7:

1. Customer satisfaction surveys make customers more loyal.

The customers who took part in the telephone interview were:

a. More than three times as likely to have opened new accounts.

b. Less than half as likely to have defected.

c. More profitable than the control group.

2. Surveys have a long term positive benefit.

Even 12 months later, people who had taken part in a ten-minute customer satisfaction interview were still opening new accounts at a faster rate, and defecting less, than customers in the control group.

3. Customers like to be consulted.

The authors conclude that customers value the opportunity to provide feedback, positive or negative, on the organisation’s ability to meet their requirements. Surveys can also heighten respondents’ awareness of a company’s products, services or other benefits, thus also influencing their future behaviour.
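The design of the experiment described above – random assignment to a surveyed group and a control group, then a comparison of per-customer behaviour rates a year later – can be sketched as follows. The group sizes (945 and 1,064) are those reported in the study; the event counts are purely illustrative numbers chosen to match the reported direction of the effects, not the study’s actual data.

```python
import random

# Randomly assign customers to the surveyed and control groups,
# mirroring the study's reported group sizes (945 vs 1,064).
customers = list(range(945 + 1064))
random.seed(42)
random.shuffle(customers)

surveyed = set(customers[:945])   # took the 10-minute interview
control = set(customers[945:])    # never contacted

def relative_rate(events_surveyed, events_control):
    """Ratio of per-customer event rates, surveyed vs control."""
    return (events_surveyed / len(surveyed)) / (events_control / len(control))

# Illustrative counts only: surveyed customers open new accounts at
# over three times the control rate and defect at under half of it.
print(round(relative_rate(190, 60), 2))  # new accounts: 3.57 (> 3)
print(round(relative_rate(40, 95), 2))   # defections: 0.47 (< 0.5)
```

The point of the random split is that any later difference in behaviour can be attributed to the survey itself rather than to pre-existing differences between the groups.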

Frequency of surveys

These conclusions might tempt some marketers to survey all customers all the time to increase sales! However, as well as being highly dubious ethically, such overkill might just ensure that the Victor Meldrew view of customer satisfaction surveys comes true! It has long been seen as good practice to survey individual customers no more than once a year, one reason why B2B companies with relatively few customers often conduct annual customer satisfaction surveys. To keep employees focused on the importance of customer satisfaction, and to gather earlier feedback on the success of service improvement initiatives, organisations with a large customer base benefit more from continuous tracking with monthly or quarterly reporting of customer satisfaction. Customers sampled to take part in continuous tracking studies should be flagged to ensure they cannot be sampled again within 12 months.
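The 12-month exclusion rule described above amounts to a simple eligibility filter over the customer base. A minimal sketch, assuming a per-customer record of when they were last surveyed (the field names and dates here are illustrative, not any particular system’s schema):

```python
from datetime import date, timedelta

# Customers surveyed within the last 12 months are excluded from the
# sampling frame for continuous tracking.
EXCLUSION = timedelta(days=365)

def eligible(customer_records, today):
    """Return customers not surveyed within the last 12 months."""
    return [c for c in customer_records
            if c["last_surveyed"] is None
            or today - c["last_surveyed"] >= EXCLUSION]

records = [
    {"id": 1, "last_surveyed": date(2006, 3, 1)},   # surveyed recently
    {"id": 2, "last_surveyed": date(2004, 9, 15)},  # eligible again
    {"id": 3, "last_surveyed": None},               # never surveyed
]

print([c["id"] for c in eligible(records, date(2006, 6, 1))])  # [2, 3]
```

In practice the flag would be set at the point a customer completes (or is invited to) an interview, so that each monthly or quarterly sample is drawn only from the eligible pool.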

Feedback to customers

Research evidence suggests that promising feedback is a very effective way of increasing response rates to a customer satisfaction survey8. This is because, whilst customers like to be consulted about the extent to which a supplier is meeting their requirements, they like it even more if the organisation acts on their feedback. But how do they know it has? In most cases, only if the company tells them. The most customer-focused organisations therefore provide feedback to customers on the outcomes of their satisfaction surveys. Feedback typically covers how the survey was done (demonstrating its objectivity and credibility), what the organisation learned from consulting customers, and what actions have been taken as a result. It can be provided through specially printed leaflets, in customer magazines or newsletters, or on the website. As well as demonstrating the organisation’s customer focus, it heightens customers’ awareness of improvements, thus accelerating their attitude change and reinforcing their satisfaction and loyalty.

Objective or biased

When people use subjective and anecdotal evidence to justify their arguments, it may be a disingenuous attempt to further their own self-interested objectives. Frederick Reichheld contends that customers don’t like customer satisfaction surveys but encourages organisations to conduct surveys to generate a Net Promoter Score2. Without providing any empirical evidence, Dr Snaith contends that customer satisfaction surveys “contribute more to customer dissatisfaction than satisfaction1”. It transpires that he works for a company that sells software and consultancy that enables companies to conduct their own customer satisfaction monitoring using techniques such as interactive voice response (IVR), where customers are taken through a sequence of computerised questions. Even Frederick Reichheld doesn’t agree with Dr Snaith on that one, pointing out that when invited to take part in an IVR survey, “Few customers miss the message that the company considers its own time (but not yours) too valuable to waste on surveys.2”


1. Snaith, T (2006) “Undermining customer loyalty”, Customer Management, November – December

2. Reichheld, F (2006) “The Ultimate Question: Driving Good Profits and True Growth”, Harvard Business School Press, Boston, Massachusetts

3. Heskett, Sasser and Schlesinger (1997) “The Service-Profit Chain”, Free Press, New York

4. Shostack, G L (1984) “Designing Services that Deliver”, Harvard Business Review, January – February

5. Kingman-Brundage, J (1993) “Service Mapping: Gaining a concrete perspective on service system design” in Scheuing and Christopher (eds) “The Service Quality Handbook”, Amacom, New York

6. Dholakia and Morwitz (2002) “How Surveys Influence Customers”, Harvard Business Review, 80 (5)

7. Dholakia and Morwitz (2002) “The scope and persistence of mere-measurement effects: Evidence from a field study of customer satisfaction measurement.” Journal of Consumer Research. 29 (2)

8. Powers and Alderman (1982) “Feedback as an incentive for responding to a mail questionnaire”, Research in Higher Education, 17
