
How to make your survey more behavioural

The last few years have seen many in-house research teams become more and more agile, developing the ability to create and deploy surveys using their own platforms. Certain skills and knowledge that were once hoarded agency-side are now shared more democratically. One area of expertise that is still less common is designing behavioural insight or implicit tools into questionnaires. While client-side researchers often have a grounding in the principles of behavioural science, they are less often trained in how to incorporate those principles into survey question writing.
When Irrational Agency first started out in 2012, some of our first projects involved working with traditional research agencies to bring a behavioural science angle into the surveys and other research they were conducting for clients. Most agencies didn’t understand behavioural science back then – but they knew it had something to add, and so they brought in the experts. Now we often work directly with clients to do something similar in their own in-house research.
Last year we advised a top UK supermarket chain on how to make their customer satisfaction surveys more accurate and get respondents more motivated to complete them. We’ve also helped a telecoms group to build more behavioural questionnaires and a European bank to build a whole in-house behavioural insights team.
For those of you building questionnaires this summer and autumn, I thought I'd share some simple techniques you can use in both question-writing and analysis to begin applying behavioural insights in your surveys. I've picked out three of the top principles from behavioural science and included some tips on how to make sure your survey takes those principles into account.
Principle 1: the brain is overworked and tired
Everyone’s busy. Everyone’s distracted. And respondents are – just maybe – not using the most focused, energetic, best-rested, quality time of the day to fill out a survey. You’ll get them while they’re watching TV or on the bus or on the loo. And the distraction (related to concepts like cognitive load and bounded rationality) means they glance at your question, skim the answers and pick a roughly correct answer.
So keep your questions three times as simple as you think they need to be. You don’t need to use slang or try to be ‘down with the kids’, but equally don’t be formal and fancy.
Instead of this (all of the bad-example questions below are taken from a variety of advice pages on how to write a good survey question – don’t believe everything you read online!):
How would you describe your current perception of our product/service within the market landscape?
Try this:
Is our product:
- The best on the market
- Better than average
- Worse than average
- Dreadful
Try replacing questions that ask people to do mental gymnastics to give an answer with something that makes answering easy. For example, move from:
What factors influence your purchase timing and frequency (e.g., seasonal trends, promotions, etc.)?
To:
In the last two months, have you bought any products that:
- Were discounted by more than 30%?
- Were discounted by more than 50%?
- Were sold in a summer promotion (e.g. barbecue or Wimbledon)?
- Were on offer because they were reaching their sell-by date?
Or from:
How important is [Specific Feature/Benefit] to you when choosing a product/service?
To:
If you could pick a product that could only do two of these things, which two would you choose?
- Benefit 1
- Benefit 2
- Benefit 3
- Benefit 4
- Benefit 5
And finally, don’t expect people to change their mindset just because you tell them to. I recently saw a vendor suggest that you should add a ‘Considering’ clause to your recommendation questions, like this:
Considering your (recent) purchase experience, how likely are you to recommend (company/product name) to your friend or colleague?
Respondents are not going to give a more meaningful answer just because you said “Considering” before the question. They are either considering their recent purchase or they’re not, and this clause is unlikely to change how they think about the answer they give. At least not in a way that helps you!
Principle 2: people don’t like to commit themselves
Ever wondered why your answers cluster towards the middle of a scale, with nothing either brilliant or terrible but always ordinary? It might be because everything you do is average. But more likely, it’s because nobody wants to declare themselves totally happy or totally unhappy.
This phenomenon of extremeness aversion makes your scales less sensitive to differences between customers than they would otherwise be. So here are two ways to combat it.
First, always add two more items to a scale than you really want. If you’d like to know whether people are happy, average or sad, use a 5-point scale instead of 3. If you really want to measure something on a 5-point scale, use a 7. Then, in analysis, combine the answers of the outer two boxes at each end into one (a bit like calculating a top-2 box score).
Some people are just curmudgeonly and really do want to tell you you’re the worst – or they love you enough to give you the best possible score. But these people’s views of your product are effectively the same as those of the people who are less opinionated but give you the second highest or second lowest score. So you can use the extreme points of the scale as a psychometric test to find out who your respondent is, without letting it bias your charts.
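If your responses end up in a data frame, the collapsing step is quick to do. Here is a minimal sketch in Python/pandas, with an invented column of 1–7 ratings purely for illustration (the column names, data and mapping are assumptions, not taken from any real survey):

```python
import pandas as pd

# Hypothetical raw answers on a 7-point satisfaction scale (1 = worst, 7 = best)
df = pd.DataFrame({"satisfaction_7pt": [1, 2, 4, 6, 7, 7, 3, 5]})

# Merge the outer two boxes at each end (1 and 2 -> 1, 6 and 7 -> 5),
# which leaves a 5-point scale for reporting.
collapse = {1: 1, 2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 5}
df["satisfaction_5pt"] = df["satisfaction_7pt"].map(collapse)

# Keep a separate flag for the genuinely extreme responders, so you can
# still profile them without letting them skew the headline charts.
df["extreme_responder"] = df["satisfaction_7pt"].isin([1, 7])

print(df)
```

The same mapping idea works in whatever analysis tool you prefer; the point is that the extremes get collapsed for reporting but kept as a separate variable for profiling who your most opinionated respondents are.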
Second, remove the middle item. Instead of 5 or 7, use a 4-point or 6-point scale. Add a ‘not applicable’ for people who don’t have any experience or opinion on what you’re asking.
This will make things ever so slightly more challenging for your respondents, who won’t be able to speed through giving everything a ‘neither agree nor disagree’ mark. Instead, they are forced to judge whether it’s better or worse than neutral. But the quality of your data will go up significantly.
The n/a box is important because some people genuinely have not experienced your returns service, or don’t have an opinion on the Chancellor of the Exchequer’s performance. These people would have picked the middle option in your 5-point scale, so give them an escape route.
Enough n/a answers and you can use them to quality-screen the respondent. If they really have no opinions, why are they taking a survey?
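As a rough illustration of that screening idea, here is a small sketch (again Python/pandas, with invented data and an arbitrary 50% cut-off – treat the threshold as something to tune for your own survey rather than a rule):

```python
import pandas as pd

# Hypothetical responses: one row per respondent, one column per 6-point
# scale question, with "n/a" recorded wherever they had no experience or opinion.
responses = pd.DataFrame({
    "q1": [3, "n/a", 5, "n/a"],
    "q2": [4, "n/a", 2, 6],
    "q3": ["n/a", "n/a", 1, "n/a"],
})

# Share of n/a answers per respondent, and a flag for anyone above the cut-off.
na_share = responses.eq("n/a").mean(axis=1)
responses["flag_for_quality_check"] = na_share > 0.5

print(responses)
```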
Principle 3: tell some stories and get some back
We all engage with stories. Whether we are watching TV or sitting around the table sharing a drink with friends, we are listening to stories and probably telling some of our own.
Stories are known to be much more memorable and engaging than abstract facts. They switch on different parts of the brain and get people to care in a different way.
But most surveys still ask questions abstractly. They will say things like “considering all aspects of our service, how likely are you to use British Airways again?” or “What are the main ways in which you find out about new technology products?” followed by a multi-select list of checkboxes that all somehow seem to be roughly the same as one another.
These questions force people into a logical mindset that requires a lot of cognitive effort to construct the right answer. Because of principle 1 (overworked and tired), few people will make that effort. So the answer you will get is incomplete and inaccurate.
Engage people in a story and see how your results improve. For example:
Think about the last time you flew with British Airways. How did you book your flight? Did you think much about the trip before heading for the airport? Did you spend ages packing or throw things together at the last minute?
[You might pause here to let them answer these questions in an open-end – you’ll almost certainly get something interesting]
Now remember arriving at the airport. Did you use our check-in desk? Was security annoying? Did you wait at the gate for a long time? And finally, sitting down on the plane – what was your seat like and were the flight attendants friendly?
[Another open-end]
Where did your flight take you? How long was it? What did you enjoy most about the flight, and what did you not like so much? Can you suggest anything for us to improve?
Now you can give them a multi-select set of options, and an open-end for ‘Other’ or ‘What we could do better’. By putting them into a narrative mindset you help them remember far more detail and give you richer, more emotive opinions about their flight.
This approach won’t work in every survey – if your respondents will only give you 5 seconds to answer an NPS question, they might drop out when you start taking them through the whole journey (or they might not! It’s worth testing).
But when you have them in a longer survey, this engaging approach is going to give you more and better answers than expecting them to do all the brainwork themselves.
There’s a lot more you can do, of course:
- Using implicit tools to read their unconscious emotional or associative reactions
- AI-guided story-hearing, with interactive narrative prompts
- Projective image prompts
- Validated psychological scales to understand respondent personality and how it interacts with the way they use your service
But I hope these tips have been useful – do drop me an email (leigh@irrationalagency.com) if you’d like to talk about how to apply them to any of your own surveys. At the moment we are offering a fast, low-cost service to a limited number of clients to help them improve and adapt their in-house surveys.
Look out soon for a similar article on how to make your qual interviews more behavioural: there are always hidden truths and we have discovered a few good ways to get access to them.