“It may please us to think of ourselves as instinctively good judges and decision makers, but the reality is that the mistakes we make, and repeat, are often the result of common, observable, and predictable biases.”
In the first three articles in our series on effective decision making, we learnt about the damaging effects of uncertainty and information overload on decision making, and we saw how these are amplified in situations where we are under pressure.
We also examined the concept of bounded rationality (Simon 1972) and saw how we have developed a toolbox of rules, tricks, and short cuts, known as heuristics, which help us make decisions when we are under pressure.
We looked at the dual processing system (Kahneman 2011) that our brain uses to make decisions: The fast and intuitive but emotional and error-prone System 1 and the more analytical and accurate but slower and laborious System 2. System 1 thinking can be highly effective when making simple decisions and processing everyday tasks, but when things get complicated it can leave us exposed to cognitive biases that often end in bad decisions.
In this article we look at some ways we can improve our decision making, harnessing the speed advantages of System 1 and the analytical capacity of System 2 while mitigating the risks associated with predictable biases.
Effective decision making is a constantly evolving cyclical process with five stages: Framing, information gathering, judgement, choice, and learning.
The first step of a decision should be to frame the question correctly. To do this we need to be clear about what type of decision we are making. A common cognitive error, when faced with a tough decision problem, is to substitute a similar but easier question for the difficult one. An example of this might be the complex question that we face when hiring: “Is this candidate likely to be effective in the role we are interviewing for?” A simpler, substitute question might be: “Does this candidate interview well?”, or even “Do I like this candidate?”
Worse, we may conflate this with other biases, for example the availability heuristic: “What comes to mind when I think of a successful person in this company?” – “Me, of course!” If we then add question substitution to the mix, we answer the question “How suitable is the candidate?” with a simpler one: “How similar is the candidate to me?” This is one of the most common causes of diversity issues.
We therefore need a simple framework for understanding the type of decision we are making and then we will be able to frame the question correctly. A useful tool is to map the decision problem onto a matrix constructed from two questions: Is the decision a one-off or part of a sequence of decisions? And is the decision directive, requiring a clear choice, or analytical, leading to discovery or gathering of information?
A one-off directive decision requires a simple action as a result: “Do we do a) or b)?” A sequential directive problem will need a more complex outcome or strategy, for example: “If outcome X occurs, do we do a) or b), and if outcome Y occurs, do we do c) or d)?” A one-off analytical question requires a simple informational response, such as “What is the cost of option a)?”, while a sequential analytical problem invites a framework to help make sense of the data: “Can we create a table that shows the relative costs and benefits of the four possible options?”
The question that is posed to the decision maker(s) must be phrased in a way that elicits the type of response that is required. Ask the wrong question and the wrong answer is virtually guaranteed.
The next step is to gather the required information. In crisis decision making, this may have to happen quickly, but wherever possible it is important to take time to ensure that all the relevant information is considered. If we only use a biased sub-set of the information available to us, the resulting decision will obviously be biased.
A golden rule of effective decision-making is “never start with a hypothesis”. If we begin with a fixed idea of the outcome, confirmation bias tends to follow very quickly. We end up seeking out information that supports the hypothesis and ignore everything else. Wherever possible we must gather the data first and then examine it to see what possible hypotheses emerge.
A useful framework for information gathering is to break the process down into three distinct areas: the immediate decision problem, which entails anything that has a direct impact on the decision itself; the transactional environment, which includes all the actors who can influence, or are likely to be influenced by, the decision; and the broader macro environment, which comprises all the external factors that could influence the behaviour of the actors in the transactional environment.
With a correctly framed question and effective information gathering, we can begin to look at the decision itself.
In decision theory, we define a decision as “the irrevocable allocation of resources, in the sense that it would take additional resources to change that allocation” (Matheson & Howard 1968). Judgements are the criteria that we use to determine that resource allocation.
No matter how good the preparation phase, if the judgements we make are incorrect, we cannot hope to make effective decisions. The two most common sources of judgement error are 1) failure to estimate probabilities correctly, which results from overconfidence and from the availability or representativeness heuristics, and 2) selective information or confirmation biases, where we only seek out and use information that supports a preconceived belief or hypothesis.
There are many excellent methods for improving accuracy in estimating probabilities. We can use iterative methods such as the Delphi technique, in which a group of participants makes a set of estimates, discusses them as a group, and then re-estimates through a series of iterations. Then there are reductive methods, such as the Fermi technique, where we break down a seemingly intractable question into a series of simpler questions. For example, if we want to estimate how many piano tuners there are in Chicago, we might start with the population of Chicago and, from that, estimate the number of households. We would then estimate how many households own a piano, based on the evidence of our own acquaintances, figure out how often a piano needs tuning, and work out how many pianos someone could tune in a day. These methods have been shown to consistently deliver much greater accuracy than single estimates.
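To make the reductive logic of the Fermi technique concrete, here is a minimal Python sketch of the piano-tuner estimate. Every input figure is an illustrative assumption rather than sourced data; different assumptions will give a different, but usually same-order-of-magnitude, answer.

```python
# A minimal sketch of a Fermi estimate for "How many piano tuners are in Chicago?"
# Every figure below is an illustrative assumption, not sourced data.

population = 2_700_000          # assumed population of Chicago
people_per_household = 2.5      # assumed average household size
piano_ownership_rate = 1 / 20   # assumed share of households with a piano
tunings_per_piano_per_year = 1  # assumed tuning frequency
tunings_per_tuner_per_day = 4   # assumed daily workload of one tuner
working_days_per_year = 250     # assumed working days

households = population / people_per_household
pianos = households * piano_ownership_rate
tunings_needed_per_year = pianos * tunings_per_piano_per_year
tunings_per_tuner_per_year = tunings_per_tuner_per_day * working_days_per_year

print(round(tunings_needed_per_year / tunings_per_tuner_per_year))  # ~54 tuners
```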
If we have the time and the resources, we can also source estimates from independent experts or even panels of independent non-expert researchers. The latter approach has been used effectively by the forecasting expert Philip Tetlock in the Good Judgment Project (Tetlock 2017).
To reduce the risk of selective information and confirmation bias, it is important to remove criteria that do not add diagnostic value, even if they seem to be factually valid: when a piece of information supports several options, it is easy to use it to validate a preconceived preference. Testing for “diagnosticity” simply means establishing whether each criterion independently adds value to the judgement. Any factor that could potentially support all the possible choices in a decision set must be discarded.
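As a rough illustration of a diagnosticity check, the short sketch below drops any criterion whose scores are identical across all options, since such a criterion cannot help us discriminate between them. The criteria, options, and scores are hypothetical.

```python
# A minimal sketch of a diagnosticity check: a criterion that scores every
# option the same supports all possible choices and is therefore discarded.
# The criteria, options, and scores below are hypothetical.

scores = {
    "cost":           {"option_a": 3, "option_b": 5, "option_c": 2},
    "brand_fit":      {"option_a": 4, "option_b": 4, "option_c": 4},  # same for every option
    "time_to_market": {"option_a": 2, "option_b": 3, "option_c": 5},
}

diagnostic_criteria = {
    criterion: by_option
    for criterion, by_option in scores.items()
    if len(set(by_option.values())) > 1  # keep only criteria that discriminate
}

print(list(diagnostic_criteria))  # ['cost', 'time_to_market']
```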
With the judgement criteria established we can begin to evaluate and eliminate options until we arrive at the decision. The way we do this will depend on the resources available for the decision and the importance of the outcome. Schoemaker and Russo (1994) identify four tiers in their Pyramid of Decision Approaches. At the bottom of the pyramid are simple intuitive decisions and at the top there is complex value analysis. With each tier there is a trade-off between increasing accuracy and a greater resource requirement in terms of time and cognitive effort. The way we structure these approaches is known as the choice architecture.
The choice architecture is a set of steps that help us select one option from a set. The table below shows four possible methods that we can use, which correspond to the pyramid tiers.
The simplest method is satisficing (SAT). Here we would simply select the first option that satisfies all the judgements, irrespective of whether other options perform better. In the example, Option 1 is good enough.
The second method is called lexicographic (LEX). In this method, we would decide on the most important criterion and pick the option that performs best on it. In the table, it is Option 2.
Moving up in terms of accuracy, but requiring greater effort, we can try elimination by aspects (EBA). Here we systematically go through the judgement criteria, ranked in order of importance, eliminating any option that does not meet a certain performance requirement. In our example, EBA leads us to Option 3.
Finally, we have the Additive method (ADD). This is the most onerous but the most accurate. In the ADD method, we would score each option across the judgement criteria and then sum the scores to give a value ranking.
We can even take this one step further, giving the judgement criteria weightings according to their importance. This is known as the Additive plus or ADD+ method.
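To show how the four methods differ in practice, here is a minimal Python sketch of SAT, LEX, EBA, and ADD+. The options, criteria, scores, threshold, and weights are hypothetical and are not the figures from the table above.

```python
# A minimal sketch of the four choice-architecture methods (SAT, LEX, EBA, ADD+).
# The criteria, weights, threshold, options, and scores are all hypothetical.

criteria = ["cost", "quality", "speed"]                 # most important first
weights = {"cost": 0.5, "quality": 0.3, "speed": 0.2}   # importance weightings for ADD+
threshold = 3                                           # minimum "good enough" score

options = {
    "option_1": {"cost": 3, "quality": 3, "speed": 4},
    "option_2": {"cost": 5, "quality": 2, "speed": 3},
    "option_3": {"cost": 4, "quality": 4, "speed": 3},
}

def satisficing(options):
    """SAT: return the first option that meets the threshold on every criterion."""
    for name, scores in options.items():
        if all(score >= threshold for score in scores.values()):
            return name
    return None

def lexicographic(options):
    """LEX: return the option that scores highest on the most important criterion."""
    most_important = criteria[0]
    return max(options, key=lambda name: options[name][most_important])

def elimination_by_aspects(options):
    """EBA: work through the criteria in order, dropping options below the threshold."""
    remaining = dict(options)
    for criterion in criteria:
        survivors = {n: s for n, s in remaining.items() if s[criterion] >= threshold}
        if survivors:              # never eliminate every remaining option
            remaining = survivors
    return list(remaining)

def weighted_additive(options):
    """ADD+: rank options by the weighted sum of their scores."""
    return sorted(
        options,
        key=lambda name: sum(weights[c] * options[name][c] for c in criteria),
        reverse=True,
    )

print(satisficing(options))             # option_1  (first 'good enough' option)
print(lexicographic(options))           # option_2  (best on 'cost')
print(elimination_by_aspects(options))  # ['option_1', 'option_3']
print(weighted_additive(options))       # ['option_3', 'option_2', 'option_1']
```

Note that, even on the same inputs, the methods can point to different options, which is why the extra effort of the additive approaches tends to pay off for higher-stakes decisions.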
The learning stage is a vital piece of the process that differentiates the decision cycle from linear decision making. Learning generates new information that feeds back into the start of the new cycle.
Every decision process should end with a four-step debrief: 1) What did we do that we would do again? 2) What did we do that we would not do again? 3) What did we not do that we would do next time? 4) What did we not do that we are glad we did not do? This learning should form the basis of the preparation stage for future decisions. Every decision process should start with a simple question: What have we learned from previous decisions that may be relevant to this decision?
“Our decision-making processes have evolved over thousands of years. They have yet to adapt to the modern world threats of complexity, uncertainty, and information overload. These challenges require us to be analytical and accurate in processing information and making choices, but we are pre-programmed to fall back on intuitive, instinctive thinking when we are under pressure.”
Decision making should be seen as a science, not an art. While it may please us to think of ourselves as instinctively good judges and decision makers, and it is easy to explain away our decision errors as the result of a changing environment or just plain bad luck, the mistakes we make, and repeat, are often the result of common, observable, and predictable biases.
Our decision-making processes have developed over the millennia. We have fast and intuitive processes that allow us to assess important information rapidly when we are under pressure and we have deeper analytical processes that allow us to solve larger and more complex problems. However, today it is complexity, novelty, uncertainty, and information overload that cause us to become stressed, not sabre-toothed tigers. These threats require us to be more analytical and accurate in processing information and making decisions, but we are pre-programmed to fall back on intuitive System 1 thinking when we are stressed.
Understanding how we make decisions, knowing the preferences in human behaviour that are repeated time and time again, and being familiar with the biases that skew our judgements can make us much more effective at making decisions under pressure.
If we can overlay this with carefully constructed frameworks that help us process the right amount and type of information, debias our judgements of the options presented to us, and select the correct option without emotion creeping in, we will consistently make fewer errors and achieve better results.
Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux.
Matheson, James E., and Ronald A. Howard. 1968. “The Principles and Applications of Decision Analysis.” Strategic Decisions Group.
Schoemaker, Paul J. H., and J. Edward Russo. 1994. “A Pyramid of Decision Approaches.” In Decision Theory and Decision Analysis: Trends and Challenges. https://doi.org/10.1007/978-94-011-1372-4_4.
Simon, Herbert A. 1972. “Theories of Bounded Rationality.” In Decision and Organization, edited by C. B. McGuire and Roy Radner. Amsterdam: North-Holland.
Tetlock, Philip E. 2017. Expert Political Judgment: How Good Is It? How Can We Know? New Edition. Princeton, NJ: Princeton University Press.