AI thinks your customers are rational. What do you think?
You can hardly have missed the explosion of AI companies and products threatening to help you with your qual interviews. They will analyse the transcripts for you, write you a set of recommendations – and the latest products will even conduct the interviews, with a chatbot or virtual interviewer asking questions and interpreting the answers.
Another big group of startups is in the simulated/synthetic consumer space: agent-based software that generates a population of digital consumers and observes or predicts how they will behave. These are becoming known as “digital twins”, since each is supposed to act exactly like a human being (who might represent a segment of your audience, the average consumer, or even a specific respondent in a qual study).
But our research has discovered that both AI moderators and digital twins usually fail to replicate an essential part of human beings: their irrationality.
Ask an LLM or a digital twin to predict how a consumer behaves, and it will earnestly explain that they weigh up the costs and benefits of different products and pick the one that is most advantageous to them. It’s the same across the major AI models: ask ChatGPT, Claude or Gemini what a consumer will do when faced with a specific choice, and each will enumerate the pros and cons of every option and settle on the most moderate, logical one, without ever committing to a definitive answer.
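You can see this for yourself by probing a model directly. Here’s a minimal sketch – it assumes the openai Python package and an API key in your environment, and the model name and prompt are purely illustrative:

```python
# Probe an LLM with a simple consumer-choice question.
# Assumes the `openai` package and OPENAI_API_KEY set in the
# environment; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

question = (
    "A shopper is choosing between their usual brand of coffee at "
    "£4.50 and an unfamiliar brand on promotion at £3.80. "
    "What will they do?"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
)

# Typically this returns a balanced cost-benefit comparison, hedged
# in both directions, rather than the habit-driven choice a real
# shopper would make.
print(response.choices[0].message.content)
```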
But watch a real person behave, and it looks totally different. You’ll see them follow a series of psychological heuristics. Unconscious habits kick in first – a drag on the cigarette, or two pints of milk thrown into the shopping trolley. Then they react emotionally: they pick a product because it matches a familiar pattern, or tell themselves a story to imagine how each product will make them feel. They’ll try to guess which product or price gives them a fair deal – unless something triggers the memory of a negative experience, in which case they’ll flee to a different product or even an alternative category.
Yes, they also add up the numbers to check they can afford what they’re buying – that’s the rational mind at work. But they make mistakes, often rounding up or down and getting the wrong answer.
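To make that concrete, here is a toy sketch of the kind of decision rule a heuristic-driven digital twin might follow. Every detail – the product fields, the ordering of the heuristics, the rounding error in the mental arithmetic – is an illustrative assumption, not a production model:

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: float
    is_habitual: bool = False      # the shopper's usual choice
    bad_memory: bool = False       # linked to a past negative experience
    reference_price: float = 0.0   # what the shopper expects to pay

def choose(products, budget):
    """Toy decision rule for a heuristic-driven shopper.

    The ordering mirrors the behaviour described above: habit first,
    then flight from bad memories, then a fairness check against the
    reference price, with only rough mental arithmetic at the end.
    """
    # 1. Unconscious habit short-circuits everything else.
    for p in products:
        if p.is_habitual and not p.bad_memory:
            return p

    # 2. Flee anything tied to a negative experience.
    candidates = [p for p in products if not p.bad_memory]
    if not candidates:
        return None  # leaves the category entirely

    # 3. Fairness heuristic: prefer the option closest to its
    #    reference price, not simply the cheapest one.
    pick = min(candidates, key=lambda p: abs(p.price - p.reference_price))

    # 4. Rough mental arithmetic: round the price before checking
    #    affordability – sometimes getting the answer wrong.
    return pick if round(pick.price) <= budget else None

shelf = [
    Product("Usual Brand", 4.50, is_habitual=True, reference_price=4.50),
    Product("New Brand", 3.80, reference_price=4.00),
]
print(choose(shelf, budget=5.0))  # habit wins despite the cheaper option
```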
This rich human behaviour is what we need to understand to make accurate forecasts or to launch new products and campaigns successfully – which means you have to teach your AI to take your customers’ behavioural biases into account.
Irrational Agency has run a number of experiments with leading LLMs to develop methods to build cognitive biases into their responses. If you’re using an AI-based tool such as a synthetic consumer/digital twin, or an LLM trained on your research outputs, we can help you to add this behavioural layer. Your AI will immediately start to make better predictions of consumer behaviour, and give you more honest answers about what your customers are likely to do.
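To give a flavour of what that behavioural layer can look like, here is one simple approach – a sketch only, far from a full methodology: wrap the model in a system prompt that forces it to apply known heuristics before any cost-benefit reasoning.

```python
# Hypothetical "behavioural layer": a system prompt that makes the
# model reason through heuristics before weighing costs and benefits.
# Assumes the `openai` package; the bias list and wording are
# illustrative, not a complete methodology.
from openai import OpenAI

client = OpenAI()

BEHAVIOURAL_LAYER = """You are simulating a real consumer, not a rational agent.
Apply these heuristics, in order, before any cost-benefit reasoning:
1. Habit: default to the usual choice unless something disrupts it.
2. Loss aversion: weigh potential losses roughly twice as heavily as gains.
3. Anchoring: judge prices against a remembered reference price, not in absolute terms.
4. Negative memories: avoid anything associated with a past bad experience.
Commit to one concrete choice and explain it in the shopper's own voice."""

def simulate_consumer(scenario: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": BEHAVIOURAL_LAYER},
            {"role": "user", "content": scenario},
        ],
    )
    return response.choices[0].message.content

print(simulate_consumer(
    "Your usual coffee is £4.50; an unfamiliar brand is on promotion "
    "at £3.80. What do you buy?"
))
```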
Many of these biases cut across categories (although they do differ by culture). That means a single syndicated study can provide a good account of the biases and heuristics we can use to train your synthetic consumers or LLM to give better answers. You may ultimately want to conduct research at category level, too, to find and incorporate the specific biases and narratives of your own category.
Other non-conscious influences, like the narratives and stories your customers buy into, or the attention constraints and level of bias that apply in your particular category (high vs low engagement), are specific to your category or even your brand. These can also be used to train an AI but may need to be collected in a bespoke survey first.
But as a first move, building a general understanding of human biases into your current AI models will put you ahead of many competitors. Drop us an email (leigh@irrationalagency.com) for a free evaluation of your current platform, to help you identify which biases or heuristics it is missing.