Imagine waking up to an internet that flickers in and out, card payments that suddenly fail, ambulances dispatched to the wrong addresses, and emergency alerts you are no longer sure are real. Whether triggered by a system failure, criminal misuse, or a cascading cyber incident, an AI-driven crisis could spread across borders in hours—not days.
The earliest signs of such an emergency would likely look familiar: a routine outage, a data breach, a technical glitch. Only later—if ever—might it become clear that AI systems played a central role. By then, the damage to trust, safety, and coordination could already be done.
Governments and companies have begun building safeguards to reduce AI risks. The EU AI Act, the U.S. NIST AI Risk Management Framework, the G7 Hiroshima AI Process, and international technical standards all focus on prevention. Cybersecurity agencies and infrastructure operators also maintain playbooks for hacks, outages, and system failures.
What’s missing is not the technical ability to restore servers or patch networks. What’s missing is a plan for managing panic, misinformation, diplomatic breakdowns, and the collapse of public trust when AI sits at the center of a fast-moving crisis.
Prevention Is Not Enough
Preventing AI failures is only half the job. The other half—largely absent from today’s AI governance debates—is preparedness and response.
Who decides when an AI incident becomes an international emergency?
Who communicates with the public when false messages flood social media and official channels are compromised?
Who keeps lines open between governments when normal diplomatic or technical systems fail?
Governments already have many of the legal tools they need. What they lack is agreement on how and when to use them. We do not need to invent new, complex institutions to govern AI in emergencies—we need governments to plan ahead and coordinate what already exists.
What an AI Emergency Playbook Should Look Like
We have faced similar challenges before. Global health emergencies, nuclear accidents, telecommunications failures, and cybercrime have all led to international agreements that prioritize speed, clarity, and coordination.
The lessons are consistent: pre-agreed triggers, named coordinators, and fast, trusted communication channels save lives.
An AI emergency framework should rest on the same foundations.
Start with a shared definition
An AI emergency should be understood as an extraordinary event caused by the development, use, or malfunction of AI that risks severe cross-border harm and exceeds any single country’s capacity to respond. Crucially, this definition must include situations where AI involvement is only suspected, not conclusively proven. Waiting for forensic certainty during the first critical hours could be catastrophic.
Create a practical, operational playbook
This should include:
- A common set of triggers and a severity scale to guide escalation from routine incident to international alert
- Criteria for acting when AI involvement is credible but unconfirmed
- A designated global coordinator able to convene governments, technical experts, law enforcement, and disaster-response specialists at short notice
- Interoperable incident-reporting systems that allow essential information to be shared in minutes, not days
- Crisis communication protocols that rely on authenticated and resilient channels, including analogue options like radio
- Clear continuity and containment measures, such as slowing high-risk AI services or shifting critical infrastructure to manual control
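The triggers and severity scale above could be sketched as a minimal data model. The field names, thresholds, and three-level scale below are illustrative assumptions for the sake of concreteness, not proposed standards:

```python
from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    """Hypothetical severity scale, from routine incident to international alert."""
    ROUTINE = 1        # handled by the affected operator
    NATIONAL = 2       # national AI emergency contact point engaged
    INTERNATIONAL = 3  # cross-border alert; global coordinator convened


@dataclass
class IncidentReport:
    """Minimal fields an interoperable incident-reporting system might exchange."""
    affected_countries: int          # countries reporting related disruption
    critical_sectors_hit: int        # e.g. payments, health, emergency services
    ai_involvement_suspected: bool   # credible but possibly unconfirmed


def classify(report: IncidentReport) -> Severity:
    """Escalate on credible cross-border harm, even when AI involvement
    is only suspected rather than forensically proven."""
    if report.affected_countries > 1 and (
        report.critical_sectors_hit >= 1 or report.ai_involvement_suspected
    ):
        return Severity.INTERNATIONAL
    if report.critical_sectors_hit >= 1:
        return Severity.NATIONAL
    return Severity.ROUTINE
```

The key design choice mirrors the definition above: suspicion of AI involvement in a multi-country incident is enough to trigger the highest alert level, because waiting for confirmation costs the critical first hours.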
Why the United Nations Should Play a Central Role
AI emergencies will not respect borders, alliances, or geopolitical blocs. Anchoring preparedness within the United Nations offers several advantages.
It allows for broader inclusion and avoids duplication among competing coalitions. It provides support to countries without advanced AI capabilities, ensuring the burden does not fall solely on a handful of major powers. Most importantly, it adds legitimacy and restraint. Any extraordinary emergency powers affecting digital systems used by billions must be lawful, proportionate, and subject to review.
International coordination must also be matched by domestic action. Governments can act now by:
- Appointing a 24/7 national AI emergency contact point
- Reviewing emergency powers to ensure they cover AI and digital infrastructure
- Aligning sector-specific plans with basic incident management and business continuity standards
- Running joint exercises that simulate disinformation waves, model failures, and cross-sector outages
- Accelerating migration to post-quantum cryptography before a hostile incident forces it
- Registering trusted senders and pre-approved alert templates so messages can reach citizens even when systems are unstable
The Time to Prepare Is Now
AI-related cyber incidents are already rising. Many countries have experienced smaller-scale outages, data manipulation attempts, and disinformation surges that hint at what a larger crisis could look like. In a hyper-connected world, a fast-moving AI failure could quickly overwhelm any single nation.
This is not a call for a new global super-agency. It is a call to connect the tools we already have into a coherent, rehearsed response.
The true test of AI governance will come on our worst day—not our best. At present, the world has no plan for an AI emergency. We can change that, but only if we build it now, test it, and anchor it in law with clear safeguards.
Once the next crisis begins, it will already be too late.