Ask us Anything - Live Zoom Session - Questions and Answers

2 November 2021

We enjoyed our live Zoom question and answer session on 2nd November 2021. One thing we’ve missed over the last 18 months is getting out and speaking to like-minded people about customer experience and the challenges faced when undertaking research.

This was the perfect opportunity to do just that, and we had some great questions from attendees. The session was hosted by Stephen Hampshire, and below is a selection of the answers to questions asked on the day or submitted in advance.

Q: We’ve started asking customers about importance, but this seems to reflect specific problems on the day (e.g. exiting the car park). It also seems not to relate to the things customers asked us for previously, such as providing WiFi.

You need to be clear on the difference between what’s important to customers, what actually makes a difference to their satisfaction, and what’s easy for them to ask for.

In the analysis, we use a matrix view of stated importance vs. impact (derived importance, sometimes called driver analysis), which helps to clarify which items are givens (i.e. high importance, but low impact). If you have a problem in an area which is a given, you’ll find that it becomes much more prominent for customers.
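
A minimal sketch of that matrix view, with invented data: here each attribute’s correlation with overall satisfaction stands in for impact (real driver analysis is usually more sophisticated), and attributes that customers rate as highly important but which barely move overall satisfaction are flagged as givens.

```python
import pandas as pd

# Hypothetical survey responses: a score out of 10 for each attribute,
# plus each respondent's overall satisfaction.
responses = pd.DataFrame({
    "overall":     [8, 6, 9, 4, 7, 5, 9, 3],
    "parking":     [7, 5, 9, 3, 6, 4, 8, 2],
    "wifi":        [9, 8, 9, 7, 8, 9, 9, 8],
    "helpfulness": [8, 5, 9, 4, 7, 5, 9, 3],
})
# Hypothetical stated-importance ratings for the same attributes.
stated_importance = {"parking": 8.5, "wifi": 9.0, "helpfulness": 7.5}

attributes = [c for c in responses.columns if c != "overall"]

# Derived importance (impact): correlation of each attribute's score
# with overall satisfaction across respondents.
impact = responses[attributes].corrwith(responses["overall"])

matrix = pd.DataFrame({
    "stated_importance": pd.Series(stated_importance),
    "impact": impact,
})

# A "given": customers say it matters, but it barely moves overall
# satisfaction -- until something goes wrong with it.
matrix["given"] = (matrix["stated_importance"] > matrix["stated_importance"].median()) & (
    matrix["impact"] < matrix["impact"].median()
)
print(matrix.round(2))
```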

It can be a good idea to ask customers for ideas that would improve the experience, but be aware that some things come up a lot simply because they’re easier to bring to mind and talk about. Sometimes these are things which are very important, but only to a minority of customers; other times they’re just things that customers think sound like a good idea.

This is actually a big danger in using research for product development. Rather than asking customers for solutions or ideas, use the research to understand customers’ fundamental needs when using your product (a design thinking approach to the customer experience).

Q: We’ve recently switched to more online surveys, and we’re seeing lower response rates and satisfaction scores. Should we be weighting to reflect responses for different age groups?

These are two really important questions. It’s really good that you’re focusing on response rates, and there are things you can do to improve these. Some of the most important are:

  • Make the survey as short and easy to complete as possible

  • Make sure your survey software is mobile-optimised

  • Get event-driven surveys to customers as soon as possible after the event

  • Use A/B tests on subject lines, times to send invitations, text in the invitation, use of images, etc. to optimise all these variables for your own customers (a simple way to check whether a variant has genuinely won is sketched after this list)
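
To illustrate that last point, a two-proportion z-test is one simple way to check whether a difference in response rates between two subject lines is real or just noise. The invitation counts below are invented for the sketch:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical A/B test: invitations sent and responses received
# for two email subject lines.
sent_a, responded_a = 2000, 310   # subject line A
sent_b, responded_b = 2000, 368   # subject line B

p_a = responded_a / sent_a
p_b = responded_b / sent_b

# Pooled response rate under the null hypothesis of no real difference.
p_pool = (responded_a + responded_b) / (sent_a + sent_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))

z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```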

It’s really important, particularly in industries such as social housing, to be aware of the links between survey channel, demographics, and levels of satisfaction. Understand the impact of survey channels on response rates by age group, and you should be able to unpick method effects from non-response bias. Ideally, find ways to allow all customers to respond in the way that suits them best.
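
On the weighting question itself: if the age profile of your respondents doesn’t match your customer base, post-stratification weighting can correct for it by up-weighting under-represented groups. A minimal sketch, with hypothetical age bands and population shares:

```python
import pandas as pd

# Hypothetical respondents: age band and satisfaction score out of 10.
sample = pd.DataFrame({
    "age_band":     ["18-34", "18-34", "35-54", "35-54", "35-54", "55+"],
    "satisfaction": [6.0, 7.0, 7.5, 8.0, 7.0, 9.0],
})

# Assumed share of each age band in the full customer base.
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

# Weight = population share / sample share, so under-represented
# groups (here, 55+) count for more.
sample_share = sample["age_band"].value_counts(normalize=True)
sample["weight"] = sample["age_band"].map(
    lambda band: population_share[band] / sample_share[band]
)

unweighted = sample["satisfaction"].mean()
weighted = (sample["satisfaction"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"unweighted: {unweighted:.2f}  weighted: {weighted:.2f}")
```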

Q: We’re seeing a lull in customer satisfaction, after a peak during the pandemic. What are the general trends?

There was a general “lockdown bump” to satisfaction in mid-2020, but that effect has long since dissipated. Since then we have seen that customers’ patience is wearing thin, and they expect organisations to have adapted to cope with the pressures they face.

A good source of information for how UK organisations are performing in terms of customer satisfaction is the UKCSI. You can find a summary of the latest results here.

Since October 2018 we have been running a quarterly Index of Consumer Sentiment, to measure the confidence of UK consumers about their own finances and the future of the UK economy. You can see the highlights for the most recent wave in this infographic and a deep dive will be featured in the next edition of Customer Insight.

Q: Is there an approach with mixed methodology you’d recommend? What do you try first?

It’s definitely a good principle to allow customers to complete a survey in whichever way suits them best (just like any other interaction). Managing a mixed-method survey can be tricky, so the more work you can do in advance to identify customers who are more likely to respond by phone or web, the better.

As a general rule: in business-to-business markets I’d favour starting with telephone and offering the option of a web survey; whereas in business-to-consumer markets I’d start with online and use telephone surveys to top up.

That’s definitely not set in stone, and a lot of it depends on who your customers are (engineers, for example, often prefer the opportunity to reflect on their answers and type them up in their own time).

Q: Benchmarking - how important is it really to understand your position in your sector, as opposed to across the board more broadly? Benchmarking tends to focus on sectors, with a flavour of top performers out of sector, but in reality businesses then tend to focus on, and take insight only from, their own sector’s performance. Specifically in the life insurance sector, customers aren't frequently taking out life insurance policies with multiple different providers, so I've always struggled to see the true benefit.

The short answer is: we totally agree!

Sector benchmarking can help when it comes to demonstrating the value of improving customer attitudes. It can help provide context: for instance, we know that NPS scores tend to be lower for life insurance suppliers relative to their satisfaction scores. But it’s not usually a good source of insight. In fact, it tends to encourage complacency.

We’d always encourage people to benchmark outside their sector to find best practice and avoid being the best of a bad bunch.

You might be interested in our free webinar on benchmarking, available to watch on demand.

Q: Could you cover how to use a service blueprint in customer journey mapping/experience? I have been on your customer journey mapping course and I am encouraging colleagues to join this session to find out more about customer experience.

I love service blueprints! For me they’re a brilliant way to connect up the customer view of a particular journey with an internal view. If you imagine a customer journey and a process map as the pieces of bread in a sandwich, the service blueprint is the filling in the middle that binds it all together.

It comes down to three things: knowledge, thinking holistically, and centring the customer experience:

  • Customers can tell you what they experience, but they don’t know why it happened. Service blueprints surface the knowledge of the doers in your organisation about how the experience is made.

  • You create the service blueprint in a collaborative workshop, involving people from all departments who touch the journey. That often highlights bottlenecks, misconceptions, and points of failure that can quickly be addressed. For instance, it’s often handoffs between departments that derail the journey.

  • Because you start with the customer journey, a service blueprint (unlike a process map) highlights places where customers can’t see what’s happening behind the scenes. This lack of information often causes unnecessary worry, which in turn leads to avoidable contact. A service blueprint makes those long silences obvious.

They’re a great tool, and if you want to know more, once again we have a free webinar all about them! You can watch the webinar on demand here.

Q: It's always been said that 'the customer is always right'. So are there any exceptions at all?

My view is that “the customer is always right” is really just a compact way of saying “the customer is right more often than you probably think, and even if they’re wrong it’s a good idea to act as though they’re right.” There are two important aspects to this: customer experience and customer insight.

When it comes to the experience, what’s more important than whether or not the customer is right is whether or not they’re acting in good faith, and in almost all instances they are. Seth Godin said “There’s no such thing as an unreasonable customer”, and what he means is that, whatever their behaviour, the customer is acting reasonably given the information and experiences they’ve had.

So even if the customer is wrong, assume that they think what they think for good reasons, and instead of getting into an argument with them, focus on finding a way forward that works for everyone.

When it comes to insight, we often use Tom Peters’ phrase “perception is reality”. I believe that it’s in the perception gaps that you can find your greatest opportunities for improvement. So if you find that customers think something that you believe not to be true, don’t dismiss the survey, and don’t throw up your hands in despair, but dig into the perception gaps to find out why customers think what they do. The answer is often in unmet emotional needs, which you can address by setting expectations or improving communication.

I wrote a longer piece about perception gaps on LinkedIn here.

Q: No specific question, but the "tips for driving change from your survey" and "investing in text analysis?" in the mailing drew my interest!

My top tips for driving change from your survey would be…

  • Prioritise. Don’t try to do everything at once, but pick a small number of what we call priorities for improvement (PFIs).

  • Momentum is vital. Within each of the PFIs, identify some concrete actions that you can take quickly, and ideally visibly, so that staff and customers can see that the survey is being used. That sense of momentum is like a flywheel that can keep things going, but if you don’t use the moment of the survey to get it started, it’s almost impossible to get it going later.

  • It’s a team effort. You can’t just deliver findings and recommendations and expect them to make a difference. You need to collaborate with colleagues to develop action plans that they believe in, and which are grounded in practical knowledge of how the business works.

  • Don’t underestimate the importance of communication. It’s what keeps customer experience front of mind for colleagues, and it’s also vital in helping customers to notice the changes you’re making, and understand that they’re coming about as a result of what they said in the survey.

When it comes to text analysis, the main thing to be aware of is that it will take a lot of investment and effort to get good results. The big dirty secret of AI and machine learning in general is that it requires a huge amount of human effort to train the algorithms. If you have tens or hundreds of thousands of comments to code, and you have thousands of examples coded already, then text analytics probably makes sense… but don’t expect an off-the-shelf solution to work for you.
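
To make the training effort concrete, here’s a minimal sketch of the kind of supervised model involved, using scikit-learn and invented comments. A real deployment would need thousands of human-coded examples, as noted above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical comments that humans have already coded by theme.
comments = [
    "The engineer arrived late and left a mess",
    "Really friendly and helpful call handler",
    "Took three calls to get the repair booked",
    "Lovely staff, sorted my problem straight away",
]
codes = ["repairs", "staff", "repairs", "staff"]

# Bag-of-words (tf-idf) features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, codes)

# Auto-code a new, unseen comment.
print(model.predict(["The repair visit was a shambles"]))
```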

There’s also a lot of potential for analysing text beyond simply coding it. For example:

  • Looking at trends in word frequencies is much more interesting than just churning out a word cloud every quarter.

  • Comparing the words used by promoters, passives, and detractors can be a great way to understand the drivers of promotion and detraction (a minimal sketch follows this list). Or the words used by customers of top-performing account managers versus the rest.

  • Looking at patterns of which words occur together in the same comment can suggest themes that you might not otherwise notice.
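
As a minimal sketch of the promoter/detractor comparison mentioned above (invented comments; real data would need stop-word handling and far more volume):

```python
from collections import Counter
import re

# Hypothetical verbatim comments split by NPS group.
promoters = ["quick and friendly service", "friendly staff, quick fix"]
detractors = ["slow response, nobody called back", "slow and unhelpful"]

def word_freq(comments):
    words = re.findall(r"[a-z']+", " ".join(comments).lower())
    return Counter(words)

prom, det = word_freq(promoters), word_freq(detractors)

# Words that separate the two groups hint at drivers of promotion
# and detraction.
for word in sorted(set(prom) | set(det)):
    if abs(prom[word] - det[word]) > 1:
        print(f"{word:12s} promoters={prom[word]} detractors={det[word]}")
```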

If you couldn’t make the session, but have a burning question of your own about customer experience and insight, we’re sure we can help.

Get in touch by completing our short online form or send us a message via email to uk@leadershipfactor.com.