Global Shield Special Briefing: A preliminary government assessment of global catastrophic risk
The latest policy, research and news on global catastrophic risk (GCR).
On 30 October, the RAND Corporation released an assessment of existential and global catastrophic risk. It is an extensive treatment of six catastrophic threats: nuclear war, climate change, pandemics, artificial intelligence, asteroid or comet impacts, and supervolcanoes. The report is particularly important because it was commissioned by the US Department of Homeland Security as input into the department’s own assessment, which is required by the Global Catastrophic Risk Management Act.
In this special, slightly extended edition of the Global Shield Briefing, we draw out some of the report’s most important conclusions. While Global Shield’s view of global catastrophic risk deviates from the report’s in some places, as any analysis of an uncertain national security risk might, the report is a useful contribution to the field and carries important implications for policymakers.
Dealing with the increasing risk of global catastrophe
The report assesses that “Overall, global catastrophic risk has been increasing in recent years and appears likely to increase in the coming decade.” The risk of naturally occurring pandemics is increasing due to human behavior, and the risk of a deliberate pandemic continues to grow as the technologies of modern biology become more powerful, affordable and accessible. For climate change, the consequences, such as extreme weather events, sea-level rise and ecosystem degradation, will continue to escalate in the coming decades, though different places will feel them over different time horizons. The risk of nuclear war is increasing due to heightened geopolitical tensions and the growing size of nuclear arsenals. And AI risk will continue to grow over the next decade as AI becomes more capable and more widely used.
Policy comment: Policymakers must recognize that global catastrophic risk is neither an extremely unlikely nor a purely long-term phenomenon. At the individual threat level, global catastrophic risk is increasing, driven primarily by geopolitical, governance, environmental, technological and societal trends. And the risk is greater than domain-specific analysis would indicate, because some threats intersect in novel or unpredictable ways and humanity could face multiple hazards simultaneously or in quick succession. The next decade is critical for global catastrophic risk reduction as a whole. Policymakers should address the risk according to four principles: (1) urgency: immediate policy prioritization, development and delivery; (2) all-hazards: governing and managing the risk as a whole, not in governance or domain silos; (3) whole-of-government: all arms of government owning, acting and coordinating on policy efforts; (4) integration: global catastrophic risk considered within existing government strategies, frameworks, policies and processes.
Estimating natural risk
In RAND’s analysis, total global catastrophic risk is largely due to human activities, which drive both the threats and the potential responses to them. RAND assesses that supervolcanic eruptions and asteroid or comet impacts are highly unlikely. A volcanic eruption large enough to be considered supervolcanic occurs, statistically, roughly once every 15,000 years. Asteroid impactors large enough to cause country-scale devastation strike roughly once every 100,000 years, while globally devastating impacts, from asteroids of about 3,000 meters in diameter, occur roughly once every 10 million years. The likelihood of a comet impact is less than 1 percent of that for asteroids.
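To put these recurrence intervals in perspective, a back-of-the-envelope conversion (our own illustration, not a calculation from the report, and one that assumes a constant annual event rate) translates a mean recurrence interval $\tau$ into the probability of at least one event over a horizon of $T$ years:

$$P(\text{at least one event in } T \text{ years}) = 1 - e^{-T/\tau} \approx \frac{T}{\tau} \quad \text{for } T \ll \tau$$

On these assumptions, over the next century ($T = 100$), a supervolcanic eruption ($\tau \approx 15{,}000$ years) has roughly a 0.7 percent chance of occurring, a country-devastating asteroid impact ($\tau \approx 100{,}000$ years) roughly 0.1 percent, and a globally devastating impact ($\tau \approx 10$ million years) roughly 0.001 percent.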
Policy comment: The report potentially underestimates the risk of catastrophe from natural sources. Quantifying the risk is difficult due to persistent knowledge and research gaps, systemic complexity and human-driven factors. RAND’s assessment treats the scale of risk from supervolcanoes and near-Earth objects as driven mostly by the size of the hazard. However, even small or moderate-sized volcanic eruptions or asteroid impactors could cause catastrophic consequences if they strike near critical systems and infrastructure, such as nuclear facilities or submarine cables. For example, volcanologists have pointed out seven global “pinch points” where low-magnitude volcanic activity clusters around important global systems. Another major factor in the risk is early warning. Both volcanic eruptions and asteroid impacts can happen with very little notice, so warning systems and planning could be critical for avoiding major disruption to infrastructure and governance, as well as for enabling broader global readiness. Furthermore, both types of hazard share a similar pathway to global catastrophe: a major loss of food supply caused by long-lasting atmospheric aerosols. Food system resilience could therefore massively reduce global catastrophic risk across all sources.
Directing technological drivers
The report highlights a number of factors and trends shaping the current state and future trajectory of global catastrophic risk. These factors influence many or all of the six threats and hazards assessed in the report, as well as others it does not consider. They include advances in technology; absent, immature and uncoordinated global governance; the individual and societal development needed for stability and resilience; and intersections among the threats and hazards.
On technological advancement specifically, the report notes that technological change sits at the root of many of the global catastrophic threats but can also provide solutions to them. How technology is developed, deployed and governed could greatly shape the direction of global catastrophic risk as a whole.
“Adoption of new AI capabilities and applications across societies and economies raises potential risks to equitable improvements in community prosperity and advancement of human capabilities. Continued advancement in the fundamental understanding of genomics, immunology, and molecular biology raises prospects of both intentional and accidental national and global biological incidents. Diffusion of nuclear technologies to additional countries or nonstate actors raises the prospect of both regional and global nuclear conflict. And applications of AI technologies further amplify nuclear and biosecurity risks, as well as risks from climate change.”
Policy comment: Policymakers should ensure that technology policy is guided by considerations of global catastrophic risk. Governments could implement a systematic process to evaluate and categorize existing or emerging technologies based on their potential to pose catastrophic risk, similar to the process outlined in another RAND series. They would then need to continually refresh the risk assessments, re-evaluate impacts on critical infrastructure and national security, and engage in foresight and forecasting exercises to test assumptions and existing policies. This effort would help inform more specific policy measures to reduce technology-driven risk. For example, technology governance could include requirements around safety and security, transparency and verification for technological applications, and accountability measures like liability and insurance schemes. Ultimately, proper risk analysis will help governments avoid the pitfalls of the Collingridge Dilemma, whereby societies realize the full extent of a technology’s risks and harms only after the technology is pervasively adopted and difficult to manage. Governments should also look to incentivize technological development that would have a disproportionately positive impact on global catastrophic risk. Important areas of focus could include renewable energy, carbon capture and storage, climate modeling, monitoring and management of natural disasters, alternative and resilient foods, ecosystem remediation and restoration, vaccine development platforms, resilient infrastructure, and advanced verification and monitoring tools for weapons of mass destruction.
Facing and tackling uncertainty
The report brings significant attention to the challenge of uncertainty in risk assessment. Given the inherent lack of data and information on highly unlikely and, in some respects, unprecedented events, it is difficult to know what threats might arise, how they might manifest, and whether humans would survive or recover. One main recommendation from the report is for the US government to “develop a coordinated and expanded federally funded research agenda to reduce the uncertainty about global catastrophic and existential risks and to improve the capabilities for managing such risks”.
“In many cases, strategies for managing global catastrophic risks face conditions of deep uncertainty in which any probability estimates are, at best, necessarily imprecise, and the consequences of potential risk management actions are, at best, imperfectly understood or recognized ignorance where even the possible outcomes are not identifiable. Deeply uncertain processes can, however, have significant policy implications, so they are important to consider lest one succumb to the proverbial fallacy of looking under the lamppost for risk management solutions.”
Policy comment: Uncertainty complicates policymaking. It makes risk assessment difficult because likelihoods, impacts and potential scenarios cannot be easily quantified or determined. It also makes policy prioritization difficult because it might not be clear which threats or hazards require attention or which policies would be most effective. However, a high degree of uncertainty must not negate the need for policy action on global catastrophic risk. The threats of terrorism or a major kinetic war between nation-states, for example, carry significant uncertainty, yet there remains political and public consensus on the need for better risk management. Uncertainty simply calls for a more nuanced approach. First, reducing uncertainty must be a priority. Techniques that improve monitoring, detection, foresight, horizon-scanning, warning and scenario development would help clarify potential futures. Second, policy efforts must treat global catastrophic risk as a whole. As the report states, “managing these risks will require a portfolio approach with multiple actions operating in synergy” – or, as we call it, an all-hazards approach. Third, and relatedly, a key focus must be planning, preparedness and resilience. While the threat vector might be uncertain, the same set of critical systems will need protection: infrastructure, food, water, energy, governance and healthcare. Shoring up these systems against traditional risks is a first step, but they must also be ready for catastrophic disruptions.
Enhancing response and recovery for AI risk
The report provides a useful articulation of AI risk. It states that “Ultimately, the risks that AI poses can be understood primarily as an amplification of existing risks both acute (e.g., nuclear war, pandemics, climate change) and slower-moving (e.g., disruption of social, governance, economic, and critical infrastructure systems; disempowerment of human decision-making).” It analogizes AI risk to “pouring gas on a fire”. The report notes that the likelihood and timing of catastrophic risk from AI remain highly unpredictable due to the inherent uncertainty of AI progress, its intersection with other catastrophic threats, and the type and impact of yet-undecided governance choices.
We broadly agree with the report’s assessment of AI risk and believe it legitimizes the concerns of advocates for greater safety in AI development and deployment. Our main disagreement is with the report’s assessment that response and recovery are “not applicable, given the uncertainty about potential pathways and outcomes.”
Policy comment: AI risk must be managed not only by regulating the development and deployment of safe, secure and trustworthy AI systems, but also through policies that improve preparedness, response and recovery should a crisis occur. Even if the pathways to catastrophe remain hazy, a range of policies could enhance the public and private sectors’ ability to respond to and recover from various forms of AI incident. Indeed, if AI creates risk by “pouring gas on a fire”, then better firefighting capability is one way of managing the risk. For example, an accountability-focused legislative framework would define how and when AI developers, deployers and end-users could be criminally and civilly liable for harms caused by AI systems. It would incentivize rapid responses to an AI crisis and provide recourse to those harmed, aiding their recovery. The potential for catastrophic AI risk also needs to be built into existing national crisis response plans. These revised plans would need to outline protocols for addressing catastrophic AI incidents, including rapid “recall” or shutdown of AI models, mobilization of government resources and capabilities, coordination between government agencies, and collaboration with society and the private sector.
P.S. Be sure to also check out a recent assessment by Global Shield’s Director of Policy, Rumtin Sepasspour, of the impact of a second Trump administration on AI policy, included among other contributions to the Bulletin of the Atomic Scientists.
This briefing is a product of Global Shield, the world’s first and only advocacy organization dedicated to reducing global catastrophic risk across all hazards. With each briefing, we aim to build the most knowledgeable audience in the world when it comes to reducing global catastrophic risk. We want to show that action is not only needed, it’s possible. Help us build this community of motivated individuals, researchers, advocates and policymakers by sharing this briefing with your networks.