Teaching Data-Driven Decision-Making

By: Dan Spokojny | August 28, 2023

fp21 CEO Dan Spokojny discusses causal models with ASEAN government officials. Aug 9, 2023

Last week I finished teaching a course on “Data-Driven Decision-Making” for a White House-led initiative called the U.S.-ASEAN Institute for Rising Leaders at the Johns Hopkins School of Advanced International Studies. This class included more than thirty extraordinary government officials from across Southeast Asia, many of whom work in their Foreign Ministries.

I was excited that SAIS chose to feature decision-making as a core skill to share with the ASEAN Rising Leaders. The field of foreign policy is ripe for change, and we know that improved decision-making methods can enhance security, peace, and prosperity.

My top-line argument for the course was:

Our mission is to use the best available evidence to make the best possible decisions.

Over the course of the class, I asked the participants to develop their own data-driven policy to address a challenge they identified from their own work. The goal was to apply the skills vital for building a data-driven policymaking culture and demonstrate that many objections to data-driven decision-making are exaggerated.

We began the class by discussing how traditional decision-making processes are conducted across our different governments. Most of us agreed that decisions often come down to the beliefs and instincts of the senior-most officials. “We provide lots of analyses and data,” noted one participant, “but our Minister usually decides what policy they want to implement based on their own experience.”

Next, we reviewed research from cognitive psychology to examine the strengths and weaknesses of decision-making processes based on intuition. For example, confirmation bias is the tendency to search for information consistent with one’s existing beliefs.

This set the stage to discuss how a more scientific approach to decision-making can help overcome a range of common human biases and reduce the uncertainty inherent in all policy environments. We discussed how thinking like a scientist can help policymakers use data and evidence to improve the impact of their policies.

Then we got to work on our policy memos. Each section of the memo invited participants to practice a different part of the data-driven process:

  1. Problem & Goal: We discussed how to set clear objectives and describe the desired end state.

  2. Theory: We practiced techniques for developing goals into strategies using the Theory of Change model, which asks policymakers to carefully delineate all the assumptions and actions that connect a policy to its ultimate goal. Theories of change are standard practice in many fields but unfortunately are still lacking in the policy and strategy space; data is ineffectual if one’s goals are ambiguous. We hope to change that!

  3. Evidence: The goal is not simply to find data that validates one’s existing assumptions, but to think like scientists who carefully test their theories by gathering evidence. Participants sought evidence to evaluate the wisdom of each step of their theory of change.
    We also discussed analytical techniques and data quality, including how measurement can be vital for reducing uncertainty about a policy’s likely success. We reviewed the conditions under which we can be confident claiming an intervention will “cause” a desired change. 

  4. Forecast: The goal of forecasting is to use the gathered evidence to estimate the likelihood that each key stage of one’s theory of change will hold. Too many well-intentioned policies fail to stand up to this scrutiny. This step encouraged participants to compare multiple policy options and choose the one most likely to succeed (a short sketch of this idea follows the list). We practiced forecasting techniques shown to improve accuracy.

  5. Monitoring: Strong monitoring and evaluation frameworks produce data and feedback throughout policy implementation, helping policymakers see where a policy is succeeding and where it is hitting obstacles along the way. Further, healthy organizations continually produce data about the successes and failures of their policies in order to build evidence about what works.
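To make the theory and forecast steps concrete, here is a minimal sketch in Python of one way to write down a theory of change and forecast it. The policy options, stage names, and probabilities are all hypothetical, and treating the stages as independent is a simplifying assumption, not a claim about how any ministry actually models its policies; in practice, the probabilities would come from the evidence-gathering step above.

```python
# Hypothetical sketch: a theory of change as an ordered list of
# (assumption, evidence-based probability that it holds) pairs.

def chance_of_success(stages):
    """Estimate overall success as the product of stage probabilities,
    assuming (simplistically) that the stages are independent."""
    p = 1.0
    for _assumption, prob in stages:
        p *= prob
    return p

# Two illustrative policy options; the numbers are invented.
option_a = [
    ("Ministry approves funding for the training program", 0.9),
    ("Program reaches the target officials", 0.7),
    ("Trained officials change their reporting practices", 0.6),
]
option_b = [
    ("New regulation is drafted and adopted", 0.8),
    ("Agencies comply with the regulation", 0.5),
]

for label, stages in [("Option A", option_a), ("Option B", option_b)]:
    print(f"{label}: {chance_of_success(stages):.0%} estimated chance of success")
```

Even this crude arithmetic illustrates the point of the exercise: a policy that rests on several shaky assumptions can have a surprisingly low overall chance of success, which is why delineating every assumption and comparing options explicitly is so valuable.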

Teaching this material reminded me of a study I once read that compared businesses “pretending to be more data-driven than they actually are” with organizations that placed data at the center of their operations. The pretenders were consistently outperformed. The lesson for government decision-making is that it’s quite easy to fake data competency: anyone can throw around a few numbers and claim to be using data to improve their decision-making process. But building the strongest possible institution requires new types of expertise and organizational processes.

Overall, I was very impressed by the participants in the U.S.-ASEAN Institute for Rising Leaders. It was interesting to learn how each government has subtle differences in its decision-making and training processes and cultures. Judging from the depth of our discussions and the quality of their policy memos, the U.S.-ASEAN relationship is on an excellent trajectory.
