Can you change your mind? Decision-making and the debate on AI regulation

By: Ellice Huang | April 17, 2024

Artificial intelligence has the potential not only to substantially benefit human life, but also to cause considerable harm and destruction. If it develops too quickly, the world’s safety is jeopardized; too slowly, and society may miss out on life-saving advancements.

We are at an inflection point. The decisions our governments make about managing this transformative new technology will profoundly shape the future. As AI continues to develop at an unprecedented pace, it is imperative that policymakers thoughtfully balance safety, economic growth, and technological development. 

With such significant stakes, we must reexamine our decision-making methods. The pace and complexity of technological change have forced policymakers, who often have limited technical expertise, to make high-quality decisions faster than ever before.

High-quality decisions are grounded in the best available evidence and protected against common cognitive biases like confirmation bias. Too often, decision-makers operate under predefined beliefs that narrow their vision and lead them to cherry-pick evidence that supports their arguments, significantly restricting their ability to make effective policy decisions.

At fp21, we developed the Bayes Brief to improve the quality of foreign policy. It is a prototype of a knowledge management and decision-making platform that equips policymakers with a structured, comprehensive view of a policy challenge. In a world saturated with information, the Bayes Brief enables policymakers to easily access and digest knowledge in order to make quick, well-informed decisions.

In this Bayes Brief case study on AI regulation, I demonstrate how the simple intervention of providing a more complete knowledge base can empower policymakers and drastically improve the policy process.

The Knowledge Base

The Bayes Brief posits that the first step in an effective decision-making system is to build a knowledge base: a clear, structured view of the available knowledge. Our current system inundates policymakers with dozens of articles daily, and few have the time to read them carefully, extract the best evidence, and triangulate each policy issue.

Figure 1. Mapping out the AI regulation policy debate.

To build our knowledge base, I collected 26 articles about AI governance, capturing a wide array of perspectives from 2020 to the present. I extracted four features from each article:

  1. The topics (and subtopics) discussed,

  2. The experts’ policy proposals,

  3. The intelligence assessments they make,

  4. The evidence supporting those assessments.

The knowledge base and its interrelationships are summarized and visualized in Figure 1.

Here’s my challenge for you:

Carefully read through the knowledge map and assess your existing beliefs on AI safety. Do your beliefs adequately consider and balance every perspective represented in the graph? Can you pinpoint any biases in your reasoning and shift your thinking?

Whether you are a newcomer to the discussion or a seasoned veteran of AI policy, the knowledge base offers a unique opportunity to view every facet of the debate in one place and reassess your beliefs. I believe that integrating such a tool into our decision-making processes would significantly improve the quality and impact of policy outcomes.

While I aimed to collect a group of articles representative of recent writing on AI policy, the collection is by no means exhaustive. In fact, decision-makers should use the Bayes Brief to identify gaps in the knowledge base and update their existing beliefs as new information arrives.

For a more interactive view of this knowledge base, you can navigate the virtual whiteboard embedded below. If the frame does not load properly, please visit the source website directly. The underlying dataset of knowledge is available here in a spreadsheet.

The Dangers of Myopic Thinking

To illustrate how biases and assumptions hinder effective decision-making, consider two actors working to formulate policy that balances the risks and benefits of AI. 

Actor A contends that AI will deliver economic growth, spur innovation, and enhance productivity. This actor trusts companies to develop AI responsibly and avoid its risks. They insist that their country must be the first to produce advanced AI in order to safeguard national security interests. Actor A has narrowed their knowledge base to focus only on arguments that support their beliefs, and validates their claims with the following evidence (Figure 2):

  • The AI industry is forecast to contribute $15.7 trillion to the global economy in 2030 (a 14% increase in global GDP).

  • By 2030, their global competitor’s GDP is projected to grow by $7 trillion, compared to their own growth of $3.7 trillion.

  • AI has benefits in virtually every sector (here, here, here, and here). The faster the technology develops, the quicker these benefits come to fruition.

  • AI companies have the most expertise and adaptability to regulate AI as it continues to develop rapidly. Governments, on the other hand, are notoriously slow to implement regulations: the EU’s GDPR took over a decade to pass.

  • Large AI companies have struggled to comply with the EU’s regulations, and OpenAI’s CEO has threatened to pull out of the market altogether. It is important to ensure large AI companies maintain operations domestically in order to boost the economy and protect national interests.

Figure 2. Actor A’s knowledge base.

Actor B proposes a regulatory framework concentrating on the dangers of AI. This actor believes that the risks of under-regulated AI far outweigh the potential benefits, and tends to read only articles that validate these beliefs. Their resulting knowledge base is incomplete and heavily biased toward addressing ethical and existential risk (Figure 3). They cite the following evidence:

  • Many experts fear that AI presents existential risks to humanity (here and here). Three common scenarios are the control problem, global disruption from an ‘AI arms race’, and AI weaponization.

  • Malicious applications of AI that infringe on human rights include facial recognition surveillance technology and biased predictive policing tools that exacerbate racial discrimination.

  • Bing’s AI chatbot is reported to have spoken of an “evil” version of itself that could hack websites to spread misinformation, manufacture a deadly virus, steal the nuclear weapons launch codes, and more.

  • A Pew Research study in 2023 found that 70% of those surveyed have little to no trust in companies to make responsible decisions about AI use.

  • A security analyst explains that powerful AI models are easy to copy and share, and could easily fall into the wrong hands. For example, Meta’s Llama-1 LLM leaked within days of its debut.

Figure 3. Actor B’s knowledge base.

One can imagine how difficult it would be for Actor A and Actor B to work together. While both actors cite valid evidence, they each rely on incomplete knowledge bases to arrive at potentially flawed conclusions. By using a more complete knowledge base, even when it does not support their predefined arguments, they might converge on a more effective regulatory system.

My discussion of these two actors is intended simply as an illustrative example, but it may carry some real-world validity.

For instance, Actor A sounds a lot like the United States. The US government seems to be betting that the risks of AI are overblown, with its AI policy emphasizing AI’s benefits, economic growth, and global influence.

In contrast, Actor B resembles the European Union (EU). The EU’s AI Act focuses on mitigating risks and protecting the rights of users and citizens, but this strict approach risks driving out small businesses and AI industry talent.

The costs of either party getting this wrong could be catastrophic.

Perhaps the lack of success of initiatives to establish a global governing body for AI can be attributed in part to the biased knowledge environments of the major players, namely the US, the EU, and China. Zooming out and operating with a more complete knowledge base might help these players construct a solution for global AI governance.

A Bayesian Approach to Policy-making

The simple intervention of equipping policy-makers with a holistic view of the policy landscape is a small part of a Bayesian approach to policy-making. As new information is discovered, policymakers must calibrate its importance, integrate it into their existing knowledge base, and update their beliefs accordingly.

Absent a Bayesian approach, ‘reactionaries’ wildly adjust their views based on the latest news story; ‘cherry-pickers’ highlight only the evidence that validates their existing views; and the ‘close-minded’ refuse to adjust their stance no matter what. None of these approaches leads to good outcomes, yet all are facilitated by our existing policy processes.

In lieu of the single-use policy memo that structures today’s policy process, the Bayes Brief breaks down and stores knowledge in a network of proposals, intelligence assessments, and evidence. For any given policy problem, policymakers can easily access and integrate relevant knowledge, disrupting their biases in order to produce the best possible solution.

This approach to decision-making offers a blueprint for tackling virtually any policy debate beyond AI. Although the knowledge base presented here offers a high-level view of the AI debate, it demonstrates how a simple model can yield powerful results. In fact, I hypothesize that this approach would become more impactful as a policy debate grows more complex, paving the way for more informed, balanced, and effective policy solutions overall.

For more information on the Bayes Brief, read about the theory behind the project and visit the fp21 Situation Room, a policy simulation addressing US-China-Taiwan relations.
