Raiding the Ivory Tower: How to Seek Academic Research Like an Expert
By: Thomas Leo Scherer, Lauren Van Metre, and Analise Schmidt | February 22, 2024
Smart policymakers know that foreign policy should be informed by the best available evidence. Too often, however, the policy process fails to seek evidence from academic research. For instance, it made intuitive sense when former Secretary of State John Kerry identified poverty as a “root cause of terrorism,” but scholars who studied the issue concluded that poverty is a poor predictor of who becomes a terrorist. Experience and intuition remain vital tools for policymakers, but the rigorous application of the scientific method is powerful for identifying patterns across time and space.
Scholarship imposes high standards of research quality seldom seen in op-eds and think tank reports. In fact, the very purpose of scholarly research is different from that of an op-ed. Whereas authors of policy articles usually argue a particular case, science is about testing hypotheses. The peer review process upholds scientific principles of objectivity, transparency, and integrity and ensures the piece engages the existing literature. Science, like the mortals who conduct it, is imperfect. It cannot by itself solve any particular policy challenge, yet a growing body of research can directly inform the decisions policymakers make every day.
Many policymakers understand this value. In a recent survey of policy practitioners, 53% reported relating social science research to their work weekly. As the scholar Dan Reiter explains, scholars help policymakers know their tools.
Nevertheless, gleaning wisdom from the mountain of academic research is a big challenge! Academic research is often difficult to understand and is not designed to help policymakers learn. Further, foreign policy institutions have not cultivated a culture of learning. The policy-academic gap still needs bridging.
All policymakers should be capable of raiding the Ivory Tower and coming away with some treasure by seeking high-quality research to inform their decisions. This article provides tips to make the path easier by 1) searching better, 2) evaluating quality, and 3) creating systems to manage knowledge.
Search Better
From the mountain of hundreds of millions of academic articles available on the internet, here are some tricks for finding the articles most relevant to your policy question:
Use a Research Search Tool. Where general search engines like Google find results from the entire internet, specialty search engines can help focus your search on scholarly publications. Google Scholar is probably the best starting point for searching the academic literature. There are many other search tools to try: we like how SciSpace compiles summaries, and Research Rabbit is useful for mapping pieces related to what you have already found and spotting what you missed.
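If you prefer to script your searches, several of these tools expose free APIs. Below is a minimal sketch in Python against the public Semantic Scholar Graph API; the query keywords are placeholders, and you should confirm the endpoint and fields against the current API documentation before relying on them.

    import requests

    # Search the Semantic Scholar Graph API for papers on a topic and
    # print a citation-sorted shortlist for manual triage.
    URL = "https://api.semanticscholar.org/graph/v1/paper/search"
    params = {
        "query": "poverty terrorism root causes",  # placeholder policy question
        "fields": "title,year,venue,citationCount",
        "limit": 10,
    }

    response = requests.get(URL, params=params, timeout=30)
    response.raise_for_status()

    papers = response.json().get("data", [])
    for paper in sorted(papers, key=lambda p: p.get("citationCount") or 0, reverse=True):
        print(f'{paper.get("citationCount") or 0:>6}  {paper.get("year")}  {paper["title"]}')

Sorting by citation count is only a rough proxy for relevance, but it gives you a defensible place to start skimming.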
Start with Review Articles. A review article broadly summarizes the research on a particular topic from a variety of perspectives. Sometimes called systematic reviews or meta-analyses, these articles collect and assess numerous studies on a topic. In foreign policy, the Annual Review of Political Science is a great source of high-quality reviews. Google Scholar's advanced search options also let you restrict a search to a specific journal (e.g., adding source:"Annual Review of Political Science" to your query).
Seek Reputable Sources. Get to know the most reputable journals in your areas of interest. A journal's ranking is a good starting point; the top 20 or so have very strong reputations. A journal's quartile ranking is another clunky but easy indicator: stick to Q1 and Q2. Each article also carries its own citation count, usually listed with search results. Relatively higher citation counts can be used as a proxy for quality. Be aware of the well-known gender and racial biases in citation counts and seek diverse perspectives.
Use Snowball Sampling. In interviewing, a snowball sample is when you ask your interviewee to recommend other people to talk to. In this case, we can ask our articles what other articles we should look at. If you find an article you love, follow the citations to other research highlighted by the author. You can also check the 'cited by' link on Google Scholar to see who is citing whom. Most articles have a literature review section, often immediately following the introduction, that you can skim for relevant assertions and the citations used to support them.
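Snowballing is also easy to automate. The sketch below, again assuming the public Semantic Scholar Graph API, walks one hop backward (what an article cites) and one hop forward (who cites it); the paper ID is a placeholder.

    import requests

    # PAPER_ID is a placeholder; the API also accepts DOIs in the form "DOI:<doi>".
    PAPER_ID = "PLACEHOLDER_PAPER_ID"
    BASE = f"https://api.semanticscholar.org/graph/v1/paper/{PAPER_ID}"

    def one_hop(edge):
        """Fetch one hop of the citation graph: 'references' or 'citations'."""
        resp = requests.get(f"{BASE}/{edge}",
                            params={"fields": "title,year,citationCount", "limit": 20},
                            timeout=30)
        resp.raise_for_status()
        # The linked paper is nested under 'citedPaper' or 'citingPaper'.
        key = "citedPaper" if edge == "references" else "citingPaper"
        return [item[key] for item in resp.json().get("data", [])]

    for paper in one_hop("references") + one_hop("citations"):
        print(paper.get("year"), paper["title"])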
Select Time Spans Intentionally. Start with publications from the last three years, then expand to five or ten if the results are sparse. More recent publications often have better data or methods and a more helpful literature review. Try some searches without date ranges and look for oft-cited pieces. Older articles have had more time to be digested, and some become foundational, as evidenced by much higher citation counts.
Get the Actual Article. Reading abstracts is great, but if an article looks interesting, you'll want to download and read the full text. When articles are hidden behind a paywall, there are other ways to find a copy. Check Google Scholar's 'All X Versions' link for a [PDF] version of an article. Academia.edu is another useful source. Alternatively, many authors post a PDF of their article on their personal website. Pre-print versions of an article have not yet passed peer review, but most of the time they are substantively similar. You can also email the author directly; most authors are thrilled to share their work.
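For paywalled articles with a DOI, the free Unpaywall API can often locate a legal open-access copy. A minimal sketch follows; the DOI and contact email are placeholders (Unpaywall asks callers to identify themselves with a real email), and the response fields should be checked against the current API documentation.

    import requests

    DOI = "10.1000/placeholder-doi"  # placeholder DOI
    resp = requests.get(f"https://api.unpaywall.org/v2/{DOI}",
                        params={"email": "you@example.org"},  # placeholder email
                        timeout=30)
    resp.raise_for_status()

    # best_oa_location is null when no open-access copy is known.
    location = resp.json().get("best_oa_location") or {}
    print(location.get("url_for_pdf") or "No open copy found; try emailing the author.")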
Move Quickly. For all of these steps, if something isn't giving you useful results on the first page or two, try different search terms or time spans. If a title grabs your attention but a skim of the abstract does not, move on. If you drop something truly useful, it will come back up in later searches. The priority is not to get bogged down in irrelevant pieces, which costs precious time and morale.
Reach Out. If you find an especially relevant article, email the author and ask if they’d be willing to discuss the topic and share their expertise with you. As with any outside input, be wary of personal bias or incentives. Most authors are happy to discuss their work, but responses may be slow. Some may inquire about an honorarium or contract for their time.
Evaluate Quality
Evaluating the quality of evidence is crucial to answering "what do we already know?" Finding the right research to inform your policy question often requires sorting through competing claims and lower quality research designs. Experts are eager to have their assumptions challenged by high-quality research rather than only seeking evidence to validate their beliefs. Low-quality research is more likely to mislead, as when flawed studies suggested ivermectin as a Covid-19 treatment.
Evaluating evidence can be time-consuming, but with the high stakes of foreign policy, it is important to take every opportunity to ensure success. To efficiently evaluate research quality, we recommend answering these questions:
Is the Explanatory Variable Relevant? All policy interventions can be expressed as doing X to cause Y. The “X” here is the explanatory variable. It may be called the program, policy, treatment, intervention, or independent variable. If the evidence concerns a cause-and-effect relationship, what is the explanatory variable they measured? Sometimes, the specific details of the explanatory variable are quite different from what the summary would suggest.
Is the Explanatory Variable Actionable? An actionable explanatory variable is one that policymakers can realistically control with policy. Some explanatory variables, such as a country’s GDP or political system, cannot be easily manipulated. These may be interesting and useful for informing the conditions in which a policy will be effective (e.g. democracy vs. autocracy), but are less useful for informing policymakers what the policy should be.
Is the Explained Variable Relevant? The explained variable is the "Y" in the previous example, the outcome being studied. Investigate precisely what effect is being measured (it may be called the outcome, result, or dependent variable) and whether it is relevant to the policy question at hand. Again, the details matter. For example, an analysis of Covid transmission may use the date of a positive Covid test as the explained variable, but we know transmission occurs days before someone tests positive.
Do They Address Other Explanations? Flawed research in the 1960s argued that coffee caused cancer; the more coffee someone drank, the more likely they were to get cancer. But correlation is not causation. It turned out that it was not coffee causing cancer but cigarettes: smokers were taking more coffee breaks. The lesson is that outcomes depend on many factors. There are many methods for dealing with alternative explanations: randomized controlled trials, comparisons using matching or similar cases, including control variables, and so on. No method is perfect, but at a minimum, the authors should attempt to address obvious competing explanations.
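To make the coffee-and-cancer lesson concrete, here is a small, self-contained simulation with invented numbers: smoking drives both coffee-drinking and cancer, so coffee and cancer correlate even though coffee has no effect at all.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Invented rates, purely for illustration. Smoking raises the chance of
    # drinking coffee (coffee breaks) and of cancer; coffee itself does nothing.
    smoker = rng.random(n) < 0.3
    coffee = rng.random(n) < np.where(smoker, 0.8, 0.4)
    cancer = rng.random(n) < np.where(smoker, 0.10, 0.01)

    # Naive comparison: coffee appears to 'cause' cancer...
    print("P(cancer | coffee)    =", round(cancer[coffee].mean(), 4))
    print("P(cancer | no coffee) =", round(cancer[~coffee].mean(), 4))

    # ...but the gap disappears once we condition on smoking.
    for s in (True, False):
        grp = smoker == s
        print(f"smoker={s}: coffee {cancer[grp & coffee].mean():.4f}"
              f" vs. no coffee {cancer[grp & ~coffee].mean():.4f}")

Conditioning on the confounder (here, stratifying by smoking status) is exactly what control variables, matching, and randomization are each trying to accomplish.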
How Big is the Effect? Research may successfully show cause and effect but neglect to mention that the effect size is tiny. If a policy tool improves the likelihood of ending a civil war, but only by .001%, it might not be valuable. Authors should discuss both how likely the impact is to occur and how large that impact is.
Does the Evidence Engage the Field? Just as you are asking "What do we already know?", scholars must review the existing literature. A good review assesses the strengths and weaknesses of previous research and discusses improvements for future work; both are indicators that the authors are earnestly trying to understand the world.
Are there Cogent Policy Recommendations? If the piece offers specific, actionable recommendations that follow from its research findings, this suggests the authors care about getting things right.
Do they Make Their Methods Public? Whether they conducted a survey, case studies, literature review, or quantitative analysis, do they describe their methods to a level where someone could attempt to reproduce their evidence?
Do they Make Their Data Public? Being transparent about the data collected and analyzed is important for quality assurance. Such data can be original interview responses, scores from a data collection effort, or anything else. Transparent data allows others to reanalyze the work or catch small errors.
Was the Evidence Collected Ethically? Scientists should be careful about manipulating people’s lives when collecting evidence. Failure to follow standard ethical principles suggests the research may have cut other corners as well. At a minimum, evidence collected from humans in surveys or interviews should mention ethical concerns or protocols followed, such as approval by an Institutional Review Board (IRB).
What does this research quality assessment look like in practice? To assess the evidence in the preventing and countering violent extremism space, we reviewed the evidence quality of over 150 documents. We found that many pieces of evidence scored well across the categories, demonstrating that high-quality evidence is readily available. In contrast, some documents (e.g. toolkits and resource guides) lacked evidentiary support and failed our quality assessment. This does not necessarily mean low-quality research is wrong, but one should trust it less. The figure below shows the distribution of quality scores across each indicator of quality.
Create Systems
Building knowledge and expertise means benefiting from all of the excellent research that preceded you. As you develop your subject matter expertise, you should consider carefully organizing the most valuable knowledge resources you find.
Store Your Evidence. As you review pieces, create a system to store your results. This could be a written bibliography, a spreadsheet, or citation-management software like Zotero. This takes some up-front investment to set up but will pay off as you apply it to similar questions.
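If you start with a spreadsheet, even a few lines of scripting can keep its entries consistent. Below is a minimal sketch of an evidence log in Python; the column names and the sample entry are our own invented placeholders, so adapt them to whatever you actually triage on.

    import csv
    from pathlib import Path

    # Hypothetical columns for a personal evidence log; adjust to taste.
    FIELDS = ["title", "authors", "year", "journal", "citation_count",
              "quality_notes", "key_finding", "link"]
    LOG = Path("evidence_log.csv")

    def add_entry(entry: dict) -> None:
        """Append one article to the log, writing a header on first use."""
        is_new = not LOG.exists()
        with LOG.open("a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if is_new:
                writer.writeheader()
            writer.writerow(entry)

    add_entry({
        "title": "Placeholder article title",
        "authors": "Doe, J.",  # placeholder author
        "year": 2021,
        "journal": "Placeholder journal",
        "citation_count": 150,
        "quality_notes": "Methods and data public; engages the field",
        "key_finding": "One-sentence takeaway goes here.",
        "link": "https://example.org/article",
    })

A structure like this also ports cleanly into Zotero or a shared drive later, which matters once you invite collaborators.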
Create a Knowledge Cohort. Once you have a method to organize your knowledge, invite others to collaborate. Make it a shared resource for others to contribute to and draw from. Sharing highlighted PDFs and document summaries among officials working on the same issue is a great way to supercharge the development of expertise.
Inspire Institutional Investments. Scaling systematic knowledge sharing to the organizational level ultimately requires institutional action. For instance, NASA transformed its knowledge management systems after the catastrophic loss of the Space Shuttle Challenger. The Department of Defense has a network of educational institutes, federally-funded research and development centers (FFRDCs), and research grants. Congress receives evidentiary support from the Congressional Research Service. We must make similar investments in foreign policy, perhaps in the form of an FFRDC for foreign policy.
The View from Atop the Tower
Discovering quality evidence from the academic cloisters takes effort but does get easier with practice and the right tools. Accessing the best available knowledge can mean the difference between success and failure. That said, raiding the Ivory Tower won’t get you any silver bullets or miracle cures for your policy woes. But somewhere in there is the best map of what we know, incomplete as it may be. We hope this guide helps you find it a bit faster so you can know what lies ahead on your policy adventures.