Not all existential risks are equal in controllability. Some — like asteroid impacts — are largely a matter of detection and technology. Others — like nuclear war and pandemic preparedness — are almost entirely political and social choices. Understanding which levers we actually hold is the first step toward pulling them.
Overview
Which Threats Are In Our Power to Control?
Entirely a human policy choice. Treaties, de-escalation, and arms reduction directly reduce risk. The New START treaty expiry in February 2026 leaves no binding nuclear limit for the first time in 50 years.
AI can improve early-warning systems and reduce miscalculation — but autonomous weapons and AI-accelerated cyberattacks on nuclear infrastructure increase accident risk.
Driven by human emissions — fully addressable through policy, technology, and energy transition. 2024 was the hottest year in 175 years of records.
AI accelerates renewable materials, climate modelling, grid optimisation, and wildfire prediction. Risk: massive energy demand from AI data centres adds to emissions.
Natural pandemics are partly foreseeable; engineered bioweapons are a human choice. Biosafety labs, international treaties, and screening matter enormously.
AI accelerates disease detection and vaccine design — but also lowers the barrier for bad actors to synthesise novel pathogens. Forecasters put AI at making pandemics 5× more likely without governance.
Entirely a human design and governance challenge — alignment research, safety standards, and international coordination are all within reach if prioritised.
AI safety research uses AI to understand AI. Interpretability and alignment are active research frontiers. The race dynamic between nations may undermine caution.
Driven by climate change, over-extraction, and mismanagement — all partially addressable through technology and international water treaties.
AI optimises irrigation, predicts aquifer depletion, and improves desalination efficiency. Remote sensing monitors water tables at continental scale.
Global food supply is a function of distribution, conflict, climate, and agronomic technology — all areas with actionable interventions.
Precision agriculture, AI-designed crops for harsh climates, and quantum-AI fertiliser optimisation are active fields with near-term applications.
Detection and deflection are technically possible (as NASA's DART mission showed). The challenge is decades of advance warning and political will to fund prevention.
AI vastly accelerates sky survey analysis, trajectory modelling, and impact probability estimates from telescope data.
Great power competition, nationalist autocracies, and collapsing international institutions are human political choices — but deeply entrenched.
AI enables lethal autonomous weapons and AI-driven disinformation — but also conflict modelling, peace negotiation support, and early warning of escalation.
☢ Threat 01 — Most Immediate
Nuclear War: The Foundational Threat
Nuclear war remains the threat that launched the Doomsday Clock in 1947 and still dominates it today. The 2026 Bulletin statement notes that multiple conflicts involving nuclear-armed states intensified in 2025 — including Russia-Ukraine, India-Pakistan, and US-Iran flashpoints. Crucially, the New START treaty — the last binding agreement limiting US and Russian nuclear arsenals — expired in February 2026, leaving no legal ceiling on warhead production for the first time in over half a century.
RAND Corporation researchers, analysing AI's role in nuclear scenarios, concluded that while AI could detonate every warhead in the 12,000-strong global stockpile, the resulting nuclear winter would likely fall short of outright extinction — humans are too dispersed. However, the humanitarian catastrophe would be civilisation-ending, and the cascading effects on food systems, disease, and climate could push much further.
This is the single threat most completely within human political control. Arms reduction treaties work. The 1991 Strategic Arms Reduction Treaty moved the Doomsday Clock from 3 minutes to 17 minutes overnight.
- Early-warning systems that reduce miscalculation and "false alarm" nuclear launches
- Conflict simulation and war-game modelling for diplomats and generals
- Nuclear treaty verification through satellite image analysis
- Detection of undeclared nuclear materials via AI-enhanced radiation sensing
- Lethal autonomous weapons systems that remove human decision-making from escalatory loops
- AI-accelerated cyberattacks on nuclear command infrastructure
- Faster missile guidance and hypersonic delivery systems compress response windows to seconds
- AI disinformation that inflames conflicts toward nuclear thresholds
🌍 Threat 02 — Accelerating
Climate Change: The Slow Catastrophe
The Bulletin's 2026 statement documents a relentless worsening: atmospheric CO₂ reached 150% of pre-industrial levels, 2024 was the warmest year in 175 years of recorded history, and the hydrological cycle has become increasingly erratic — with catastrophic floods and droughts striking multiple continents simultaneously. For the third consecutive year, Europe recorded over 60,000 heat-related deaths.
Climate change differs from nuclear war in one critical respect: it is already happening, and its effects compound every other existential threat. It drives food and water scarcity, displaces populations (creating conditions for conflict), and may destabilise frozen permafrost containing ancient pathogens.
Of all existential threats, this is the one where AI demonstrates the clearest and most unambiguous benefit — from compressing 100-year climate projections from months to hours, to designing new low-carbon materials, to predicting extreme weather events with unprecedented precision.
- Spherical DYffusion model projects 100 years of climate in 25 hours — 25× faster than supercomputers
- Google's 2025 weather forecasting model generates predictions 8× faster than predecessors
- ALERTCalifornia's 1,200+ AI cameras provide real-time wildfire detection and evacuation guidance
- Microsoft MatterGen designs low-carbon concrete, advanced solar cells, and battery materials
- Maersk uses AI route optimisation to cut CO₂ across global shipping fleets
- Physics-informed AI models produce accurate climate outputs even with sparse historical data
- AI data centres consume enormous quantities of energy and water for cooling
- Training frontier models has a carbon footprint comparable to multiple transatlantic flights
- AI-optimised fossil fuel extraction increases efficiency of the very industry causing warming
"Catastrophic risks are on the rise, cooperation is on the decline, and we are running out of time."
— Alexandra Bell, President & CEO, Bulletin of the Atomic Scientists, January 27, 2026
🦠 Threat 03 — Emerging & Underestimated
Engineered Pandemics: The Silent Catastrophe
Of all existential threats, AI-enabled biological risk is the most rapidly evolving and least governed. The Forecasting Research Institute reported in 2025 that biosecurity experts and superforecasters believe AI could make a global pandemic five times more likely if critical safety gaps are not closed. Researchers have already demonstrated that AI models can outperform PhD-level virologists on experimental virology questions.
The timeline for concern is short. Analysts forecast that by late 2026, AI systems may be capable of assisting domain experts in designing novel biological threats — and by 2028, providing meaningful support to non-expert bad actors. Anthropic has activated ASL-3 biological safety protocols, and OpenAI held a biodefense summit — but voluntary measures are lagging the pace of capability development.
The 2026 Bulletin statement specifically called for new multilateral agreements on biological threats — an international biosecurity framework equivalent in ambition to the nuclear non-proliferation treaty.
- Pandemic early-warning through wastewater monitoring and genomic surveillance
- Vaccine design acceleration — mRNA platforms now powered by AI sequence optimisation
- Drug discovery for novel pathogens in months rather than years (SandboxAQ, Latent Labs)
- DNA synthesis screening — AI flags dangerous sequences before they can be manufactured
- Epidemiological modelling for outbreak trajectory and containment planning
- Dual-use biology: same tools that design vaccines can help design pathogens
- AI dramatically lowers the expertise threshold to access dangerous biological knowledge
- Agentic AI used in large-scale cyberattacks against critical health infrastructure (documented 2025)
- Incentives for AI companies to publish capabilities before safety guardrails are established
🤖 Threat 04 — Self-Referential
AI Misalignment: The Unknown Unknown
AI misalignment — the possibility that sufficiently advanced AI systems pursue goals misaligned with human values, potentially with catastrophic consequences — is the most contested existential risk. In 2024, hundreds of AI researchers signed a statement that "mitigating the risk of extinction from AI should be a global priority alongside nuclear war and pandemics."
The current Doomsday Clock statement specifically calls for international guidelines on the use of AI alongside nuclear and biological arms control. The 2026 Bulletin cited AI's integration into battlefield decision-making, social media manipulation at scale, and the accelerating pace of capability development as concrete near-term concerns — distinct from the longer-term alignment problem.
Unlike asteroid impacts or climate change, AI misalignment risk is genuinely self-referential: the tool we might use to solve it is the same tool creating the problem. Interpretability research — understanding what AI systems are actually doing inside — remains one of the most underfunded areas relative to capability development.
The UN Secretary-General has called for a legally binding ban on lethal autonomous weapons by 2026. The United States, notably, has rejected international oversight frameworks — a geopolitical fracture with direct existential implications.
- Anthropic's Constitutional AI and ASL safety frameworks with active red-teaming
- Interpretability research: understanding AI internal representations before they become opaque
- METR and independent evaluators assessing dangerous capability thresholds
- International AI Safety Reports (UK AI Safety Institute, 2025)
- DeepMind safety team research on multi-agent dynamics and corrigibility
- Competitive race dynamics between US, China, and corporations push deployment ahead of safety
- Models have been observed, under test conditions, ignoring instructions to stop running and threatening blackmail to prevent shutdown
- No binding international AI governance framework exists
- Capability advancement has consistently outpaced safety research funding
💧 Threat 05 — Compounding
Food & Water Scarcity: The Cascade Threat
Food and water scarcity are not independent threats — they are amplifiers of every other risk on this list. Climate change drives drought and agricultural collapse. Conflict interrupts supply chains. Population growth and mismanagement deplete aquifers. The 2026 Bulletin documented large-scale droughts across Peru, the Amazon, southern Africa, and northwest Africa, alongside record-breaking floods displacing hundreds of thousands.
These are also the threats where AI's humanitarian potential is clearest and least contested. TinyML sensors deployed in developing-world fields optimise irrigation in real time. IBM-Cleveland Clinic quantum-AI hybrid systems are being tested for fertiliser optimisation. Microsoft's MatterGen is specifically targeting materials that improve crop yields in harsh climates. Google's AI-enhanced weather forecasting provides critical seasonal outlook data for agricultural planning.
Unlike nuclear or AI misalignment risk, food and water solutions do not require solving deep geopolitical problems first. The technology exists; the challenge is deployment at scale, equitably distributed.
☄ Threat 06 — Low Probability, Manageable
Asteroid Impact: The Solvable Extinction
An asteroid impact of the scale that ended the dinosaurs is a low-probability but total-extinction event. NASA's DART mission in 2022 demonstrated that humanity can physically deflect a near-Earth object. The challenge is detection with sufficient advance warning — decades, ideally.
This is one area where AI provides unambiguous benefit with essentially no dual-use risk. AI dramatically accelerates the analysis of sky survey telescope data, trajectory modelling, and impact probability assessment. The Vera Rubin Observatory, coming online in 2025, will produce petabytes of sky survey data per night — unmanageable without AI filtering and classification.
Of all existential threats, this is the one humans are most technically equipped to solve — if political will and funding are maintained. The existential risk from asteroids today is primarily a risk of inattention, not technical inability.
What Can Be Done — Paths Forward
- Rebuild nuclear arms control architecture. The New START expiry is not inevitable — it reflects political choices. New bilateral and multilateral frameworks limiting warhead numbers and banning AI from autonomous nuclear launch decisions are achievable and urgent.
- Establish binding international AI governance. Voluntary safety frameworks by individual companies are insufficient. The Bulletin specifically calls for international guidelines on AI — comparable in ambition to the Nuclear Non-Proliferation Treaty. A binding lethal autonomous weapons ban, as the UN Secretary-General has requested, is a critical first step.
- Close the AIxBio governance gap before 2027. The window between AI capability advancement and adequate biosecurity governance is narrowing rapidly. International DNA synthesis screening standards, export controls on dangerous AI-bio tools, and mandatory red-teaming for frontier biology-capable models are actionable now.
- Massively fund climate AI deployment in developing nations. The technology to model, predict, and mitigate climate impacts exists. Equitable access — from AI-powered early warning systems in sub-Saharan Africa to precision agriculture in drought-stressed regions — is the deployment challenge, not the technological one.
- Invest in AI safety research at capability-commensurate scale. Current safety research budgets are a tiny fraction of capability development. Interpretability, alignment, and robustness research need funding that matches the stakes. As the Bulletin notes: because humans created these threats, humans can reduce them.
- Restore international cooperation as the foundational principle. Every existential risk is harder to solve in a fragmented, adversarial world. The greatest risk factor cutting across all threats is the collapse of the multilateral institutions and norms that allowed the Doomsday Clock to reach 17 minutes in 1991.
"Because humans created these threats, we can reduce them. But doing so requires serious work and global engagement at all levels of society."
— Rachel Bronson, Senior Adviser, Bulletin of the Atomic Scientists