September 5, 2024

Understanding Human Decision-making

Behavioural science studies how and why people do the things they do and seeks to understand the factors which influence this. Behavioural scientists delve into the intricacies of motivation, decision making and communication, exploring how we form our judgements and how our judgements translate into actions. We examine the effect our behaviour has on others, and the impact other people's behaviour has on us. And we seek to understand how the world around us shapes all this. In this article we look into the way we make judgements and decisions and the things which determine good and bad outcomes.

Two Systems of Thinking

At the heart of behavioural decision science lies dual-process theory, popularised by Daniel Kahneman and Amos Tversky, which posits that we use two different processes to make judgements and decisions, and that these processes can yield quite different outcomes even when working from the same information. The reason for having two systems is simple necessity. We face a staggering number of decisions each day (roughly 40,000, according to some studies). Naturally, we can't analyse every single one, so our minds have evolved two processes which we employ according to the complexity and importance of the decision. Kahneman called these System One and System Two.

System One is a fast and intuitive mode of thinking. It mostly operates beneath our conscious awareness and demands minimal effort. When we encounter a simple task, like opening a door, System One kicks in effortlessly. We rely on visual cues, such as the position of the door handle, to decide whether to push or pull. While these decisions are usually accurate, System One tends to settle for solutions which are merely "good enough." It lacks precision. What's more, it is highly susceptible to external influences, which means it can be unreliable.

System Two, on the other hand, is a deliberate, analytical mode of thinking. When we are faced with complex or high-stakes decisions, like manoeuvring a car into a tight parking space, System Two takes charge. We carefully assess the gaps between the car and potential obstacles, constantly monitoring the situation. System Two shines when it comes to dealing with abstract ideas and generally produces more accurate outcomes. However, it comes with a drawback: it is slow and mentally demanding.

System One is driven by pattern recognition. When confronted with a decision, we instinctively search for familiar reference points and match what we see in front of us with existing patterns stored in our subconscious memory. We call these pattern-recognition processes "heuristics". They are mental shortcuts, or rules of thumb, which have evolved to enable rapid decision making.

Heuristics – Algorithms for the Brain

There are two common misconceptions about heuristics. The first is that they are flaws in our decision processes. They are not: they are essentially the algorithms our minds use to make decisions faster and more easily. Without these decision-making rules we could never cope with the volume of choices which confront us every day. Effective heuristics are the bedrock of sound decision-making. They take care of the multitude of small, inconsequential decisions which would otherwise overload us and prevent us from allocating sufficient attention to higher-stakes decisions. And, as Gerd Gigerenzer pointed out, they may even outperform more complex decision systems in critical situations where speed is of the essence.

The second popular misconception is that heuristics are the root of cognitive biases. This is not true. The outcomes we get from heuristic decisions can be good or bad, just as with more analytical decisions. Like algorithms, our heuristics need to be trained on sets of data (our experiences). The better the data we feed in, the better our decision outcomes will be; it is the context rather than the process which leads to biases.
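
To make the algorithm analogy concrete, here is a toy sketch of Gigerenzer's recognition heuristic in Python. The cities and "experience" sets are invented for illustration; the point is that the same decision rule produces a good or a biased answer depending entirely on the data it has been trained on.

```python
# A toy model of a heuristic as an "algorithm trained on experience":
# the recognition heuristic for guessing which of two cities is larger.
# All data here is made up for demonstration purposes.

broad_experience = {"London", "Paris", "Berlin", "Madrid"}
skewed_experience = {"Gstaad", "St Moritz"}  # only small resort towns

def recognition_heuristic(city_a: str, city_b: str, experience: set) -> str:
    """If we recognise exactly one of the two cities, guess that it is
    the larger one; otherwise fall back to an arbitrary choice."""
    known_a, known_b = city_a in experience, city_b in experience
    if known_a and not known_b:
        return city_a
    if known_b and not known_a:
        return city_b
    return city_a  # no recognition signal: effectively a coin flip

# Broad experience gives a good outcome; skewed experience gives a biased one.
print(recognition_heuristic("Paris", "Gstaad", broad_experience))   # Paris
print(recognition_heuristic("Paris", "Gstaad", skewed_experience))  # Gstaad
```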

We use many heuristics every day, but Kahneman and Tversky identified three types which appear frequently across most of the decision types we face. They are "availability", "representativeness" and "anchoring".

Availability - What You See Is All There Is!

The availability heuristic is based on recognition. It describes our tendency to make judgements based on how readily information comes to mind. Daniel Kahneman called it WYSIATI - What You See Is All There Is. It places us and our individual experience at the centre of our universe. Availability can be summarised as the notion that if we can recognise or recall something easily, it must be more significant than a less easily recalled alternative. The availability heuristic generally works quite well, but it can be confounded by highly emotive or newsworthy information. For example, people will often cite shark attacks as a more likely cause of death than common diseases, simply because shark attacks are vivid and widely reported, while diseases kill vastly more people.

Representativeness – Where Stereotypes Come From

The representativeness heuristic describes the way we estimate the likelihood of something based on how similar it is to our mental image of the thing in question. It causes us to form a pre-determined impression of what somebody or something should look like. One research study showed that people judged someone wearing a suit as more likely to be a lawyer than someone in a bathing suit, even when both photos were of the same person.

It's easy to see how both these heuristics can be disruptive to our decision making and to the way we treat people, because they use reference points which may be random or inaccurate. They involve making judgements and decisions with only partial information and they cause us to overlook or ignore other potentially pertinent information once we've formed an initial impression.

These heuristics are where stereotypes come from. If we've only ever seen a certain type of person in a role, we readily assume others in the same role will conform to its stereotype. If we have a pre-formed idea of the type of person in a certain role, we may assume only those with the same characteristics are suitable for the role.

The Power of Anchors

Anchoring is one of the most researched concepts in decision science. It is the process by which we use an existing reference point to estimate the value of something else. If we are asked to estimate how long a project will take to complete, we will use a previous project as a reference point and then adjust from it. Economists and financial analysts use anchoring all the time when making forecasts.

We might use anchoring when deciding how much we are willing to pay for something. It's also a tactic commonly used by companies trying to sell a product or service. Anchoring is a great tool if the reference point is relevant and accurate, as Philip Tetlock demonstrated with the Good Judgment Project, but it can lead to errors if the reference point is skewed. A great example of ineffective anchoring can be seen in a 1997 study by Fritz Strack and Thomas Mussweiler, in which participants were asked to answer two questions:

- Half of the group were asked whether Gandhi was older or younger than 9 years when he died, and then asked to estimate his exact age at death.

- The other half were asked whether he was older or younger than 140 years when he died, and then asked to estimate his exact age at death.

Unsurprisingly, all the participants answered the first question correctly. However, the group given the anchor of 9 years guessed, on average, that Gandhi was 50 years old when he died, while the group given the anchor of 140 years guessed 67. Even though both anchors were obviously absurd, they had a significant effect on the estimates.
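
One way to see the mechanism is as anchor-and-adjust: we start at the anchor and move only part of the way toward our own best guess. The sketch below is a deliberately crude model with an invented adjustment weight, not the study's actual mechanism, but it reproduces the qualitative result: low anchors drag estimates down and high anchors drag them up.

```python
# A minimal, made-up model of anchoring-and-adjustment. The 0.5 adjustment
# weight is illustrative, not an empirical value from the 1997 study.

def anchored_estimate(anchor: float, private_guess: float,
                      adjustment: float = 0.5) -> float:
    """Move only part of the way from the anchor toward our own guess."""
    return anchor + adjustment * (private_guess - anchor)

# Suppose our unanchored guess for Gandhi's age at death would be 70.
print(anchored_estimate(anchor=9, private_guess=70))    # 39.5 - dragged down
print(anchored_estimate(anchor=140, private_guess=70))  # 105.0 - dragged up
```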

The effect of anchoring is particularly powerful in negotiations and in forecasting, because counter-offers or re-estimates remain anchored by the original reference point. New information is only used to adjust from the anchor; it does not override the first piece of information.

Bounded Rationality

We see so many errors and biases repeated time and time again that we simply cannot accept the Rational Economic Model of decision-making. Equally, however, the ability of the human race to learn, adapt and evolve suggests our decision processes are not some sort of emotion-driven random walk.

But if it is not heuristic decision processes which lead to judgement errors, what is it? Herbert Simon explained human decision making as a type of "bounded" rationality: we strive to make sense of the world around us and make rational decisions based on the information available to us, but this ability is constrained by the limits of our cognitive capacity, the quality of the information we have and the environment in which we are making the decision. Under constraints such as time or social pressure, partial or imperfect information, poor data selection and emotion-driven responses can all lead us to make ineffective decisions.

The Three Enemies of Rational Decision Making

Evolutionary Preferences

The mental frameworks we use in our decision processes have evolved over millions of years. For most of this time, survival has been our primary concern and the threats we faced were often physical. To ensure our survival, System One responds rapidly to perceived threats with an in-built "freeze, flight or fight" heuristic. This has led us to develop some deeply ingrained preferences:

First, we focus on the immediate present rather than the future. Second, we pay more attention to risks than rewards, which means we focus on avoiding losses at the expense of potential gains. Both would have made sense when the immediate situation may have represented a life-or-death threat, but in today's world they can mean ignoring or underestimating a long-term benefit relative to a short-term risk.

A third preference is to seek safety in numbers, which means we may find ourselves saying or doing whatever is necessary to avoid conflict.

These preferences are not random; they are systematic deviations from rational decision-making, caused by using the wrong frame of reference, and they deliver sub-optimal outcomes.

Information Overload

Information overload can be defined as having more information than we are capable of processing in the time available. We are all in a permanent state of information overload these days; one recent estimate suggested 90% of the data in existence has been created in the past two years. This matters because the amount of information we actually use in decisions follows an inverted-U shape. Initially, as more information becomes available, we try to use more inputs in our decisions. Beyond a certain point, however, we can't process it all, so we start to simplify and reduce the number of factors we consider. Once we become truly overloaded, we use only a very small percentage of the available information, and the information we do use isn't always the best.
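
The inverted-U is easy to visualise with a toy function. The functional form and numbers below are assumptions chosen purely to illustrate the shape, not empirical values from the overload literature.

```python
import math

def information_used(available: float, capacity: float = 10.0) -> float:
    """Toy inverted-U: information use rises with availability,
    peaks around our processing capacity, then collapses."""
    return available * math.exp(-available / capacity)

for available in (2, 5, 10, 20, 40):
    print(f"available={available:>3}  used={information_used(available):.2f}")
# Output rises to a peak at 10 (3.68), then falls away (2.71, 0.73).
```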

Self-Worth

Humans have a fundamental need for self-esteem, driving us to see ourselves positively and maintain a sense of self-worth. This can lead to attribution error and outcome bias. Attribution error occurs when we put our successes down to internal factors (like ability) and our failures down to external factors (like luck) in order to preserve our self-esteem. This can lead us to judge decisions based on the outcome rather than the process, which encourages us to perpetuate poor decision processes which happen to deliver positive outcomes through luck, and to abandon good processes which deliver undesired outcomes because of external factors or bad luck.

Making Better Decisions with Heuristics

In a world where most of our challenges no longer threaten our survival, a slower, more deliberative approach to decision making might yield better outcomes. Unfortunately, we don't always have time for deep thinking. Furthermore, we aren't always good at discerning which decisions to leave with the intuitive System One and, more importantly, which to pass over to the more analytical System Two. We like to believe we predominantly rely on our analytical processes for important decisions, but our natural inclination is to conserve energy. Consequently, we often let System One take the reins and then retrospectively craft a narrative to justify our decision and make it appear analytical.

Mitigating Cognitive Biases

Fortunately, because our time, risk and social preferences are a function of our evolution and are common to most of us, the biases which result are predictable and recognisable. This means we can introduce circuit breakers into our process, prompting ourselves to engage System Two when the stakes are high. We can also design our own heuristics to deliver the outcomes we want, both for high-frequency everyday decisions and for crisis situations when speed and accuracy are both at a premium.
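
A circuit breaker can be as simple as a routing rule. Here is a hedged sketch, with invented thresholds and fields, of how such a rule might route decisions between the two systems; the exact criteria would need to be tuned to the decisions you actually face.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    stakes: int       # 1 (trivial) to 10 (critical) - invented scale
    familiarity: int  # 1 (novel) to 10 (routine) - invented scale

def route(decision: Decision) -> str:
    """Default to fast System One; break the circuit and engage
    System Two when stakes are high or the situation is novel."""
    if decision.stakes >= 7 or decision.familiarity <= 3:
        return "System Two: stop and analyse"
    return "System One: decide intuitively"

print(route(Decision("Choose lunch", stakes=1, familiarity=9)))
print(route(Decision("Sign a major contract", stakes=9, familiarity=4)))
```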

The Importance of Time

Time is probably the single biggest variable in effective decision-making. The more time that passes between a stimulus and a response, the more time there is for the risks to change and the response to become the wrong one. The key to making the best decisions is therefore to take as long as possible, within the constraints of the situation, to analyse the data we are using to inform the decision and then, once the choice is made, execute the action as fast as possible.

The best way to do this is to have a pre-assembled set of tools and rules which allow us to choose a decision process, select an option and execute the required action as quickly and accurately as possible. Well-constructed heuristics have been proven to be highly effective at saving time and energy in critical and non-critical decision tasks, without sacrificing performance.

Conclusion

It isn't possible to eliminate heuristics, and nor should we try. They are the foundations of decision making in a complex world. They are highly effective when speed is more important than accuracy, and they can be the difference between success and failure, or even life and death, in a crisis situation. But, like algorithms, heuristics require training, and they only work if we feed in the right information and use them in the right context. We will explore various heuristic decision tools in a future article, but if you can't wait, get in touch with us to learn more.

References

Thinking, Fast and Slow; Daniel Kahneman

Simple Heuristics That Make Us Smart; Gerd Gigerenzer

Superforecasting: The Art and Science of Prediction; Philip Tetlock
