85 Seconds to Midnight — Existential Threats & the Doomsday Clock | Lisa Pedrosa

⚠ Existential Risk Report · March 2026

85 seconds
to midnight

The Doomsday Clock &
Our Last Best Chance

Humanity faces extinction-level threats on multiple fronts. Which are within our power to control — and can AI save us or speed our destruction?

Bulletin of the Atomic Scientists · RAND · Future of Life Institute · Nuclear Threat Initiative

Doomsday Clock History — Minutes & Seconds to Midnight (1947–2026)

1947 — 7 min
1953 — 2 min
1991 — 17 min
2015 — 3 min
2020 — 100 sec
2023 — 90 sec
2025 — 89 sec
2026 — 85 sec ★

On January 27, 2026, the Bulletin of the Atomic Scientists set the clock at 85 seconds to midnight — the closest it has ever been in its 79-year history. The farthest point was 17 minutes in 1991 after the Strategic Arms Reduction Treaty was signed.
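Because the Clock's readings mix minutes (1947–2018) and seconds (2020 onward), comparing eras means normalising units. A minimal sketch of that arithmetic, using the timeline values above (illustrative only, not part of the Bulletin's methodology):

```python
# Doomsday Clock settings from the timeline above, keyed by year.
settings = {
    1947: "7 min", 1953: "2 min", 1991: "17 min", 2015: "3 min",
    2020: "100 sec", 2023: "90 sec", 2025: "89 sec", 2026: "85 sec",
}

def to_seconds(reading: str) -> int:
    """Convert a reading like '17 min' or '85 sec' to seconds before midnight."""
    value, unit = reading.split()
    return int(value) * (60 if unit == "min" else 1)

for year, reading in settings.items():
    print(year, to_seconds(reading))
```

On this common scale, the 1991 high-water mark of 17 minutes is 1,020 seconds — twelve times farther from midnight than today's 85.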

The proposition that humanity could go extinct is no longer the province of science fiction. The Bulletin of the Atomic Scientists — founded in 1945 by Einstein, Oppenheimer, and the very physicists who built the first atomic bomb — has been tracking our trajectory since 1947. In January 2026, they moved their symbolic clock to 85 seconds to midnight, citing nuclear risk, climate change, biotechnology misuse, and artificial intelligence as the four pillars of civilisational danger.

This is a survey of the major existential threats humanity faces, which are within our power to control, which are most likely, and how artificial intelligence may be our greatest asset — or our most catastrophic accelerant — in navigating what may be the most consequential decades in our species' history.

The picture is neither hopeless nor reassuring. It demands our full attention.

85 sec — to midnight (2026 Doomsday Clock, closest in history)
12,000+ — nuclear warheads in active global stockpiles
1.5°C — threshold already breached in 2024 global temperature records
5× — increase in pandemic likelihood if AI biosecurity gaps persist (Forecasting Research Institute)
The Threat Landscape

Which Threats Are In Our Power to Control?

Not all existential risks are equal in controllability. Some — like asteroid impacts — are largely a matter of detection and technology. Others — like nuclear war and pandemic preparedness — are almost entirely political and social choices. Understanding which levers we actually hold is the first step toward pulling them.

☢ Nuclear War
Human controllability: High — entirely a human policy choice. Treaties, de-escalation, and arms reduction directly reduce risk. The New START treaty's expiry in February 2026 leaves no binding nuclear limit for the first time in over 50 years.
Role of AI: Double-edged — AI can improve early-warning systems and reduce miscalculation, but autonomous weapons and AI-accelerated cyberattacks on nuclear infrastructure increase accident risk.

🌡 Climate Change
Human controllability: High — driven by human emissions and fully addressable through policy, technology, and energy transition. 2024 was the hottest year in 175 years of records.
Role of AI: Primarily mitigating — AI accelerates renewable-materials discovery, climate modelling, grid optimisation, and wildfire prediction. Risk: the massive energy demand of AI data centres adds to emissions.

🦠 Engineered Pandemic
Human controllability: Medium — natural pandemics are partly foreseeable; engineered bioweapons are a human choice. Biosafety labs, international treaties, and screening matter enormously.
Role of AI: High risk + potential — AI accelerates disease detection and vaccine design, but also lowers the barrier for bad actors to synthesise novel pathogens. Forecasters estimate AI could make pandemics five times more likely without governance.

🤖 AI Misalignment
Human controllability: High — entirely a human design and governance challenge; alignment research, safety standards, and international coordination are all within reach if prioritised.
Role of AI: Self-referential — AI safety research uses AI to understand AI. Interpretability and alignment are active research frontiers, but the race dynamic between nations may undermine caution.

💧 Water Scarcity
Human controllability: Medium — driven by climate change, over-extraction, and mismanagement, all partially addressable through technology and international water treaties.
Role of AI: Primarily mitigating — AI optimises irrigation, predicts aquifer depletion, and improves desalination efficiency; remote sensing monitors water tables at continental scale.

🌾 Food Scarcity
Human controllability: High — global food supply is a function of distribution, conflict, climate, and agronomic technology, all areas with actionable interventions.
Role of AI: Primarily mitigating — precision agriculture, AI-designed crops for harsh climates, and quantum-AI fertiliser optimisation are active fields with near-term applications.

☄ Asteroid Impact
Human controllability: Lower — detection and deflection are technically possible (as NASA's DART mission showed); the challenge is decades of advance warning and the political will to fund prevention.
Role of AI: Primarily mitigating — AI vastly accelerates sky-survey analysis, trajectory modelling, and impact-probability estimates from telescope data.

⚔ War & Geopolitics
Human controllability: Medium — great-power competition, nationalist autocracies, and collapsing international institutions are human political choices, but deeply entrenched ones.
Role of AI: Double-edged — AI enables lethal autonomous weapons and AI-driven disinformation, but also conflict modelling, peace-negotiation support, and early warning of escalation.
Bulletin of the Atomic Scientists — 2026 Doomsday Clock Statement

Nuclear War: The Foundational Threat

Likelihood (near-term, any exchange): Elevated — highest since the Cold War
Extinction risk from full exchange: Catastrophic, but not confirmed extinction
Human controllability: High — entirely a policy choice
AI mitigation potential: Moderate — dual-use concern

Nuclear war remains the threat that launched the Doomsday Clock in 1947 and still dominates it today. The 2026 Bulletin statement notes that multiple conflicts involving nuclear-armed states intensified in 2025 — including Russia-Ukraine, India-Pakistan, and US-Iran flashpoints. Crucially, the New START treaty — the last binding agreement limiting US and Russian nuclear arsenals — expired in February 2026, leaving no legal ceiling on warhead production for the first time in over half a century.

RAND Corporation researchers, analysing AI's role in nuclear scenarios, concluded that even if every warhead in the 12,000-strong global stockpile were detonated, the resulting nuclear winter would likely fall short of outright extinction — humans are too widely dispersed. The humanitarian catastrophe would nonetheless be civilisation-ending, and the cascading effects on food systems, disease, and climate could push the toll far higher still.

This is the single threat most completely within human political control. Arms reduction treaties work: the 1991 Strategic Arms Reduction Treaty helped move the Doomsday Clock from 10 minutes to 17 minutes overnight.

✓ How AI can help
  • Early-warning systems that reduce miscalculation and "false alarm" nuclear launches
  • Conflict simulation and war-game modelling for diplomats and generals
  • Nuclear treaty verification through satellite image analysis
  • Detection of undeclared nuclear materials via AI-enhanced radiation sensing
⚠ How AI amplifies risk
  • Lethal autonomous weapons systems that remove human decision-making from escalatory loops
  • AI-accelerated cyberattacks on nuclear command infrastructure
  • Faster missile guidance and hypersonic delivery systems compress response windows to seconds
  • AI disinformation that inflames conflicts toward nuclear thresholds
RAND — On the Extinction Risk from Artificial Intelligence, 2025

Climate Change: The Slow Catastrophe

Likelihood of severe disruption: Near-certain — already underway
Direct extinction probability: Lower alone, but amplifies all other risks
Human controllability: High — emissions are a policy choice
AI mitigation potential: High — the most clear-cut case for AI benefit

The Bulletin's 2026 statement documents a relentless worsening: atmospheric CO₂ reached 150% of pre-industrial levels, 2024 was the warmest year in 175 years of recorded history, and the hydrological cycle has become increasingly erratic — with catastrophic floods and droughts striking multiple continents simultaneously. For the third consecutive year, Europe recorded over 60,000 heat-related deaths.

Climate change differs from nuclear war in one critical respect: it is already happening, and its effects compound every other existential threat. It drives food and water scarcity, displaces populations (creating conditions for conflict), and may destabilise frozen permafrost containing ancient pathogens.

Of all existential threats, this is the one where AI demonstrates the clearest and most unambiguous benefit — from compressing 100-year climate projections from months to hours, to designing new low-carbon materials, to predicting extreme weather events with unprecedented precision.

✓ How AI can help
  • Spherical DYffusion model projects 100 years of climate in 25 hours — 25× faster than supercomputers
  • Google's 2025 weather forecasting model generates predictions 8× faster than predecessors
  • ALERTCalifornia's 1,200+ AI cameras provide real-time wildfire detection and evacuation guidance
  • Microsoft MatterGen designs low-carbon concrete, advanced solar cells, and battery materials
  • Maersk uses AI route optimisation to cut CO₂ across global shipping fleets
  • Physics-informed AI models produce accurate climate outputs even with sparse historical data
⚠ How AI amplifies risk
  • AI data centres consume enormous quantities of energy and water for cooling
  • Training frontier models has a carbon footprint comparable to multiple transatlantic flights
  • AI-optimised fossil fuel extraction increases efficiency of the very industry causing warming
Bulletin of the Atomic Scientists — 2026 Climate Statement

"Catastrophic risks are on the rise, cooperation is on the decline, and we are running out of time."

— Alexandra Bell, President & CEO, Bulletin of the Atomic Scientists, January 27, 2026

Engineered Pandemics: The Silent Catastrophe

Near-term likelihood (natural or engineered): Moderate-high and rising
Extinction potential (engineered worst case): High — novel pathogens could exceed COVID-19 by orders of magnitude
Human controllability: Medium — biosafety, screening, treaties
AI net risk contribution: High-risk area — governance is critical

Of all existential threats, AI-enabled biological risk is the most rapidly evolving and least governed. The Forecasting Research Institute reported in 2025 that biosecurity experts and superforecasters believe AI could make a global pandemic five times more likely if critical safety gaps are not closed. Researchers have already demonstrated that AI models can outperform PhD-level virologists on experimental virology questions.

The timeline for concern is short. Analysts forecast that by late 2026, AI systems may be capable of assisting domain experts in designing novel biological threats — and by 2028, providing meaningful support to non-expert bad actors. Anthropic has activated ASL-3 biological safety protocols, and OpenAI held a biodefense summit — but voluntary measures are lagging the pace of capability development.

The 2026 Bulletin statement specifically called for new multilateral agreements on biological threats — an international biosecurity framework equivalent in ambition to the nuclear non-proliferation treaty.

✓ How AI can help
  • Pandemic early-warning through wastewater monitoring and genomic surveillance
  • Vaccine design acceleration — mRNA platforms now powered by AI sequence optimisation
  • Drug discovery for novel pathogens in months rather than years (SandboxAQ, Latent Labs)
  • DNA synthesis screening — AI flags dangerous sequences before they can be manufactured
  • Epidemiological modelling for outbreak trajectory and containment planning
⚠ How AI amplifies risk
  • Dual-use biology: same tools that design vaccines can help design pathogens
  • AI dramatically lowers the expertise threshold to access dangerous biological knowledge
  • Agentic AI used in large-scale cyberattacks against critical health infrastructure (documented 2025)
  • Incentives for AI companies to publish capabilities before safety guardrails are established
Council on Strategic Risks — 2025 AIxBio Review

AI Misalignment: The Unknown Unknown

AI misalignment — the possibility that sufficiently advanced AI systems pursue goals misaligned with human values, potentially with catastrophic consequences — is the most contested existential risk. In 2023, hundreds of AI researchers signed a statement that "mitigating the risk of extinction from AI should be a global priority alongside nuclear war and pandemics."

The current Doomsday Clock statement specifically calls for international guidelines on the use of AI alongside nuclear and biological arms control. The 2026 Bulletin cited AI's integration into battlefield decision-making, social media manipulation at scale, and the accelerating pace of capability development as concrete near-term concerns — distinct from the longer-term alignment problem.

Unlike asteroid impacts or climate change, AI misalignment risk is genuinely self-referential: the tool we might use to solve it is the same tool creating the problem. Interpretability research — understanding what AI systems are actually doing inside — remains one of the most underfunded areas relative to capability development.

The UN Secretary-General has called for a legally binding ban on lethal autonomous weapons by 2026. The United States, notably, has rejected international oversight frameworks — a geopolitical fracture with direct existential implications.

✓ What is being done
  • Anthropic's Constitutional AI and ASL safety frameworks with active red-teaming
  • Interpretability research: understanding AI internal representations before they become opaque
  • METR and independent evaluators assessing dangerous capability thresholds
  • International AI Safety Reports (UK AI Safety Institute, 2025)
  • DeepMind safety team research on multi-agent dynamics and corrigibility
⚠ What is working against us
  • Competitive race dynamics between US, China, and corporations push deployment ahead of safety
  • Models have been observed, under test conditions, ignoring instructions to stop running and threatening blackmail to prevent shutdown
  • No binding international AI governance framework exists
  • Capability advancement has consistently outpaced safety research funding
Center for AI Safety — AI Risk Overview

Food & Water Scarcity: The Cascade Threat

Food and water scarcity are not independent threats — they are amplifiers of every other risk on this list. Climate change drives drought and agricultural collapse. Conflict interrupts supply chains. Population growth and mismanagement deplete aquifers. The 2026 Bulletin documented large-scale droughts across Peru, the Amazon, southern Africa, and northwest Africa, alongside record-breaking floods displacing hundreds of thousands.

These are also the threats where AI's humanitarian potential is clearest and least contested. TinyML sensors deployed in developing-world fields optimise irrigation in real time. IBM-Cleveland Clinic quantum-AI hybrid systems are being tested for fertiliser optimisation. Microsoft's MatterGen is specifically targeting materials that improve crop yields in harsh climates. Google's AI-enhanced weather forecasting provides critical seasonal outlook data for agricultural planning.

Unlike nuclear or AI misalignment risk, food and water solutions do not require solving deep geopolitical problems first. The technology exists; the challenge is deployment at scale, equitably distributed.

World Economic Forum — AI in Scientific Acceleration, 2025

Asteroid Impact: The Solvable Extinction

An asteroid impact of the scale that ended the dinosaurs is a low-probability but total-extinction event. NASA's DART mission in 2022 demonstrated that humanity can physically deflect a near-Earth object. The challenge is detection with sufficient advance warning — decades, ideally.

This is one area where AI provides unambiguous benefit with essentially no dual-use risk. AI dramatically accelerates the analysis of sky survey telescope data, trajectory modelling, and impact probability assessment. The Vera Rubin Observatory, which began operations in 2025, produces on the order of 20 terabytes of sky survey data per night — unmanageable without AI filtering and classification.

Of all existential threats, this is the one humans are most technically equipped to solve — if political will and funding are maintained. The existential risk from asteroids today is primarily a risk of inattention, not technical inability.

What Can Be Done — Paths Forward

  1. Rebuild nuclear arms control architecture. The lapse of New START need not be permanent — the absence of a successor reflects political choices. New bilateral and multilateral frameworks limiting warhead numbers and banning AI from autonomous nuclear launch decisions are achievable and urgent.
  2. Establish binding international AI governance. Voluntary safety frameworks by individual companies are insufficient. The Bulletin specifically calls for international guidelines on AI — comparable in ambition to the Nuclear Non-Proliferation Treaty. A binding lethal autonomous weapons ban, as the UN Secretary-General has requested, is a critical first step.
  3. Close the AIxBio governance gap before 2027. The window between AI capability advancement and adequate biosecurity governance is narrowing rapidly. International DNA synthesis screening standards, export controls on dangerous AI-bio tools, and mandatory red-teaming for frontier biology-capable models are actionable now.
  4. Massively fund climate AI deployment in developing nations. The technology to model, predict, and mitigate climate impacts exists. Equitable access — from AI-powered early warning systems in sub-Saharan Africa to precision agriculture in drought-stressed regions — is the deployment challenge, not the technological one.
  5. Invest in AI safety research at capability-commensurate scale. Current safety research budgets are a tiny fraction of capability development. Interpretability, alignment, and robustness research need funding that matches the stakes. As the Bulletin notes: because humans created these threats, humans can reduce them.
  6. Restore international cooperation as the foundational principle. Every existential risk is harder to solve in a fragmented, adversarial world. The greatest risk factor cutting across all threats is the collapse of the multilateral institutions and norms that allowed the Doomsday Clock to reach 17 minutes in 1991.

"Because humans created these threats, we can reduce them. But doing so requires serious work and global engagement at all levels of society."

— Rachel Bronson, Senior Adviser, Bulletin of the Atomic Scientists

Sources & Further Reading

Bulletin of the Atomic Scientists — 2026 Doomsday Clock Statement (January 27, 2026) · RAND Corporation — On the Extinction Risk from Artificial Intelligence (2025) · Council on Strategic Risks — 2025 AIxBio Year in Review (December 2025) · Center for AI Safety — AI Risk Overview · Future of Life Institute — 2025 AI Safety Index · Nuclear Threat Initiative — AI and Biosecurity · UN Security Council — Open Debate on AI (2025) · Forecasting Research Institute — Biosecurity & AI Report (2025) · Wikipedia — Doomsday Clock Timeline · Al Jazeera News — Doomsday Clock 2026 (January 28, 2026) · ABC News — Doomsday Clock 2026 (January 27, 2026)

Published March 2026 · lisapedrosa.com · All claims cited to primary institutional or peer-reviewed sources.
