Human Judgment: How Accurate Is It, and How Can It Get Better?
By: John Wilcox | February 13, 2023
Officials working in foreign policy make judgments about countless international challenges, and the success of their policies depends on those judgments. Will a foreign assistance program in a particular country achieve its objectives? Will a stronger deterrence posture lead an opponent to back down, or to escalate? What are the downstream consequences of a particular bilateral treaty?
Drawing from my background in cognitive science and empirical epistemology, I investigate the accuracy of judgments like these in my new book, “Human Judgment”.
The book addresses two questions about human judgment: how accurate is it, and how can it get better? Two noteworthy implications for foreign policymaking emerged from my research: one pessimistic, and one optimistic.
The bad news is that the science suggests human judgment is often much less accurate than we might hope or expect. For example, some researchers estimate that as many as 40,000 to 80,000 people in the US die each year because of preventable medical diagnostic errors. Other research is similarly pessimistic: some researchers estimate that at least 4% of death sentence convictions in the US are false convictions.
One must grapple with the implications of these findings for the craft of US foreign policy. It is no secret that foreign policy often results in failure. Might failures in human judgment be at least partly to blame for such outcomes?
Nobody expects perfection, but my research for this book suggests that humans are often bad at recognizing the infirmities of their own judgment. What’s more, when presented with evidence of their own inaccuracy, humans often dismiss it or attribute it to “bad luck.” It is obvious how such phenomena could, in principle, hinder foreign policy.
The good news, however, is that science also suggests ways to improve the accuracy of human judgment. The Good Judgment Project and other research funded by the US intelligence community shed light on how to do that. For example, research demonstrates that some individuals can be extraordinarily well “calibrated” in their judgments. Perfect calibration occurs when predictions made with, say, 100% confidence turn out to be correct 100% of the time, and predictions made with 20% confidence turn out to be correct exactly 20% of the time. Poorly calibrated individuals (indeed, most of us are poorly calibrated) have confidence that bears little relation to how often they are right: they may claim 100% confidence in many predictions that turn out false, or wrongly claim things are 50/50 when they are not.
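To make the idea of calibration concrete, here is a minimal sketch in Python (the forecasts and numbers are invented for illustration, not taken from the book or from Good Judgment Project data) of how one might check calibration by grouping predictions by their stated confidence and comparing that confidence with the observed hit rate:

```python
from collections import defaultdict

def calibration_report(forecasts):
    """forecasts: list of (stated_confidence, outcome) pairs, where
    outcome is 1 if the predicted event occurred and 0 otherwise."""
    by_confidence = defaultdict(list)
    for confidence, outcome in forecasts:
        by_confidence[confidence].append(outcome)
    for confidence in sorted(by_confidence):
        outcomes = by_confidence[confidence]
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"stated {confidence:.0%} confidence -> correct "
              f"{hit_rate:.0%} of the time (n={len(outcomes)})")

# Invented example data: a well-calibrated forecaster's 70% calls should
# come true roughly 70% of the time; these numbers are illustrative only.
forecasts = [(0.9, 1), (0.9, 1), (0.9, 0), (0.7, 1), (0.7, 0), (0.5, 1), (0.5, 0)]
calibration_report(forecasts)
```

For a perfectly calibrated forecaster, the two percentages on each line would match: the 70% predictions would come true roughly 70% of the time, and so on.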
People vary considerably in their accuracy, but many do not realize it. How much better would foreign policy be if decision-makers were perfectly calibrated, so that their stated chances of a proposed policy’s success reliably matched reality? Might national security spending be more effective? Could catastrophic disasters be averted? How much better would foreign policy outcomes be if decision-makers improved the quality of their judgment?
Fortunately, research suggests a number of ways to improve calibration and accuracy:
Use base rates and statistics: Numerous studies indicate that use of “base rates” correlates with, or improves, accuracy. A “base rate” is the frequency with which something has happened in the past; for example, in estimating the probability that a particular negotiation tactic would broker a ceasefire in the Yemen civil war, one relevant base rate would be the frequency with which similar tactics have succeeded in the past (see the sketch after this list). Research shows that more accurate individuals are more likely to draw on “outside” information like this, while less accurate individuals are more likely to ignore such base rates or to dismiss them as irrelevant because they are less specific to the question at hand.
Practice cognitive control: Another study indicated that people with greater accuracy tend to have greater cognitive control–that is, as I describe in the book, a “greater ability to override intuitively appealing but incorrect responses”, to “avoid jumping to conclusions” and to “engage in more prolonged, careful consideration” that helps them arrive at the correct answer (p. 106). This suggests that policy decision-making could be undermined by analyses based on “gut feeling” alone.
Practice active open-minded thinking: Research suggests people are likely to be more accurate if they engage in more “active open-minded thinking”. Here, active open-minded thinking refers to “the extent to which an individual considers evidence against their favored opinions, spends enough time on a question before giving up, and takes into account the opinions of others when forming their conclusions” (p. 105). As fp21 researchers have previously argued, this suggests policy processes can be undermined by the “adversarial” and legalistic approach that officials sometimes take to defending their preferred policy choices.
Be accountable: Some research suggests that people make more accurate judgments when they are accountable to others for those judgments. Accountability can target either the outcome of a judgment (i.e., whether it turned out to be true) or the process by which it was formed (i.e., whether it took into account appropriate sources of information). Again, it seems plausible that such accountability could improve foreign policymaking processes and outcomes.
Take training: Certain kinds of training can also improve calibration and accuracy, according to research, though not all training is equally effective. Consequently, foreign policy and other organizations seeking to improve judgment and decision-making should consider adopting evidence-based training.
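To make the base-rate idea from the first item above concrete, here is a small, hypothetical sketch (the numbers, the scenario, and the function name are invented for illustration, not drawn from the book): start from the historical frequency with which similar efforts have succeeded, then adjust that outside-view estimate with case-specific evidence using Bayes’ rule.

```python
def posterior_from_base_rate(base_rate, evidence_prob_if_success, evidence_prob_if_failure):
    """Combine an outside-view base rate with case-specific evidence via Bayes' rule.

    base_rate: historical frequency with which similar efforts succeeded.
    evidence_prob_if_success / evidence_prob_if_failure: how likely the observed
    case-specific evidence would be if the effort succeeds vs. fails.
    """
    numerator = base_rate * evidence_prob_if_success
    denominator = numerator + (1 - base_rate) * evidence_prob_if_failure
    return numerator / denominator

# Invented numbers: suppose similar negotiation tactics have worked ~20% of the
# time, and the current case-specific signals are twice as likely to appear
# when talks ultimately succeed as when they fail.
estimate = posterior_from_base_rate(base_rate=0.2,
                                    evidence_prob_if_success=0.6,
                                    evidence_prob_if_failure=0.3)
print(f"{estimate:.0%}")  # ~33%: the evidence lifts the estimate above the
                          # 20% base rate, but the outside view keeps it modest.
```

The point is not the particular numbers but the discipline: anchor on the outside view first, then let case-specific evidence move the estimate, rather than starting from intuition alone.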
These are a few of the numerous ways that judgment and decision-making can be improved. Those interested in other approaches and further research can check out the book, “Human Judgment”.