The Coming Wave: AI, Containment & Mustafa Suleyman’s Risk Framework

Hall of AI Legends - Journey Through Tech with Visionaries and Innovation

Share This Post

The Coming Wave: AI, Containment & Mustafa Suleyman’s Risk Framework

In an age increasingly shaped by emergent intelligence and exponential technological acceleration, the boundary between innovation and existential risk is eroding at a rate few policymakers are prepared to manage. Artificial intelligence no longer exists in isolated research labs—it now permeates every layer of society, influencing how we diagnose disease, wage war, surveil populations, automate decisions, and even conduct diplomacy. The problem isn’t that AI is arriving too slowly—it’s that it’s arriving too fast, outpacing the very legal, ethical, and institutional systems meant to guide its use.

This collision between omni-use capability and global unpreparedness is the central theme of The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma (2023), a policy-forward manifesto by Mustafa Suleyman, co-founder of DeepMind and current CEO of Microsoft AI. Rather than simply offering a cautionary tale, Suleyman introduces a strategic, multi-layered framework for containment—a term he uses not to suggest technological suppression, but strategic governance to prevent irreversible harm.

Suleyman argues that society is facing a “containment problem” akin to nuclear proliferation, but broader, faster, and harder to isolate. AI and other transformative technologies are not just tools—they are force multipliers capable of reshaping the entire global order. In his view, the challenge of the 21st century is not only to harness these tools for good but to ensure they do not spiral beyond our control.

In this blog, we unpack the key ideas of The Coming Wave, examine specific risk domains posed by uncontained AI, break down Suleyman’s proposed containment framework, and weigh the central debate: How do we protect civilization without paralyzing innovation?

This piece will explore:

  • The core ideas of Suleyman’s book, including proliferation, omni-use, and the containment dilemma
  • Concrete examples of near-term AI risks, including biotech manipulation, surveillance autocracies, and autonomous warfare
  • The architecture of Suleyman’s containment framework, including regulatory tiers, safety audits, and international treaties
  • The ongoing debate between innovation advocates and regulatory realists
  • A concluding call to action for cross-sector alignment on AI governance before the coming wave crests

Suleyman’s argument is not anti-technology—it is pro-responsibility. The coming wave, he argues, is not inherently dangerous. What makes it dangerous is our failure to prepare for its consequences with the seriousness it demands. The task now is to meet acceleration with coordination. And fast.

Core Principles from The Coming Wave

At the heart of The Coming Wave lies a strategic philosophy that breaks from the techno-optimism that has long characterized Silicon Valley. Mustafa Suleyman’s approach is both pragmatic and urgent: we are entering a new technological era not defined by linear progress, but by compounding risk. His framework is built on three interdependent principles—each describing a structural reality that policymakers, companies, and citizens can no longer afford to ignore. Together, they define why our current governance models are inadequate, and what must be done to avoid catastrophe.

1. Proliferation Is Inevitable

Suleyman’s first—and perhaps most sobering—principle is that the spread of powerful technologies cannot be stopped. In contrast to Cold War-era nuclear deterrence, today’s emergent technologies like AI, CRISPR-based gene editing, and synthetic biology are:

  • Cheap to replicate: Unlike nuclear weapons, which require rare isotopes and nation-state capabilities, these technologies can be deployed using commercially available cloud infrastructure and open-source codebases.
  • Economically incentivized: Market forces are aligned toward their rapid adoption. AI can improve profit margins, optimize logistics, generate content, and reduce labor costs—all irresistible drivers in a competitive economy.
  • Digitally distributed: Code, once written, can be copied endlessly. Foundational AI models and genomic editing tools can be shared, adapted, and improved globally without centralized oversight.

Suleyman likens this reality to a loss of friction in the system. Innovation can now outpace control, not by months—but by orders of magnitude. Attempts to ban or slow the proliferation of these tools are not only impractical, but in some cases, counterproductive. “We cannot uninvent AI,” Suleyman writes. “We can only choose how responsibly we deploy and contain it.”

He argues that this democratization of power is not inherently bad—but it becomes catastrophic when combined with the next principle: omni-use capacity.

2. Omni-Use Technologies Are Inherently Risky

Omni-use refers to the dual-edged nature of emerging technologies: they can be used to dramatically improve human life—or to destabilize it. The same AI system that generates personalized education curricula can be retooled to automate disinformation campaigns, manipulate elections, or generate synthetic bioterrorism blueprints.

Suleyman frames this omni-use potential not as a philosophical concern, but as the defining operational risk of the 21st century. Key examples include:

  • AI for surveillance and repression: Advanced facial recognition and predictive policing software can improve public safety—or enable totalitarian control at an unprecedented scale, as seen in authoritarian regimes.
  • Synthetic biology tools: CRISPR gene editing can be used to eliminate genetic disorders—or to engineer entirely new classes of pathogens.
  • Autonomous weapons: Drones and AI-enabled targeting systems can reduce military casualties—or become rogue agents of violence without human oversight.

In a world of omni-use, intent is no longer enough. Capabilities themselves become threats, because their misuse is not just theoretically possible—it’s increasingly likely. What makes omni-use so dangerous is that its duality is invisible until it’s too late. The very same tools we celebrate in medicine or education can, without strong controls, be redirected toward catastrophic ends with minimal friction.

This principle underscores Suleyman’s call for a radical change in mindset: emerging technologies must be treated as inherently ambivalent, not inherently good.

3. Containment Must Be Designed Early

While many governments and institutions default to reactive policymaking—responding to crises only after they materialize—Suleyman argues this approach is a formula for disaster. His third principle is a clear imperative: containment must be built into emerging technology systems from the outset.

Containment, as Suleyman defines it, is not about halting progress. It’s about designing durable governance architecture that keeps transformative technologies aligned with human values and collective safety. This framework is multi-tiered, incorporating:

Technical Mechanisms

These are tools built into AI and other systems that ensure they behave within known, predictable, and safe boundaries. Examples include:

  • Interpretability tools that help engineers and regulators understand why a model behaves the way it does
  • Red-teaming and adversarial testing to proactively identify vulnerabilities
  • Constitutional AI principles embedded directly in the model to prevent unsafe outputs
  • Hard-coded constraints or sandboxing to prevent misuse of powerful features

Institutional Controls

Suleyman argues that containment cannot succeed without robust institutional frameworks that oversee, audit, and intervene when necessary. This includes:

  • National and international AI safety boards, with authority to pause or review high-risk systems
  • Public sector audits of large-scale deployments in healthcare, education, and law enforcement
  • Mandated impact assessments for new models before release
  • Interdisciplinary ethics councils embedded within major AI labs

Policy Guardrails

Governance must also include enforceable legal and diplomatic measures. Suleyman advocates for:

  • Licensing regimes for access to powerful models or compute clusters
  • Global treaties that mirror nuclear non-proliferation agreements, tailored for AI and biotech
  • Kill-switch protocols that can deactivate models that breach certain thresholds
  • Transparency laws for AI-generated content, including watermarking and source disclosure

The containment model urges one unifying principle: governance must evolve in tandem with capability. Waiting for disaster—as happened with climate change or social media manipulation—would be a tragic repetition of unlearned lessons.

Real-World AI Risks: A Cross-Sector Scan

Unlike many futurists who speak in vague hypotheticals, Mustafa Suleyman roots his risk framework in concrete, present-day threats. In The Coming Wave, he is clear: the danger is not that these risks might emerge—it’s that they already have. His call for containment is grounded in specific, observable developments across critical sectors where AI is already transforming—and destabilizing—traditional safeguards.

These domains reveal how quickly the lines between progress and peril can blur, especially when dual-use technologies are scaled without corresponding oversight. Suleyman identifies several areas where the absence of preemptive governance is already yielding systemic vulnerabilities.

1. Biological Engineering: Life Sciences at the Edge

AI is rapidly revolutionizing the field of synthetic biology, enabling breakthroughs that would have seemed impossible just a decade ago. Algorithms can now simulate protein folding (as seen in DeepMind’s AlphaFold), accelerate drug discovery, and generate synthetic DNA sequences at scale. These tools hold immense promise—from curing rare diseases to customizing treatments based on a person’s genome.

But with this progress comes a profound shift in risk dynamics.

Suleyman warns that the convergence of AI with biotech makes it easier than ever for small teams—or even lone actors—to engineer novel biological agents. AI systems can now assist in:

  • Designing synthetic viruses or bacteria
  • Optimizing viral payloads for transmission
  • Mapping genomic vulnerabilities across populations
  • Automating laboratory procedures through robotic pipetting and machine learning optimization

These capabilities drastically lower the barrier to entry for bioengineering. What once required years of institutional research and high-level containment facilities can increasingly be simulated—or even prototyped—with cloud access and commercial-grade labs.

This is especially concerning in a geopolitical context. Rogue states or extremist groups could, in theory, weaponize biological tools for asymmetric warfare, pandemic disruption, or political sabotage. Suleyman calls this “the democratization of catastrophic risk”—a paradigm in which the capacity for mass harm no longer resides solely with powerful nation-states.

Without enforceable global bio-AI safeguards, biosecurity could soon become the defining crisis of the post-AI era.

2. Mass Surveillance: The Infrastructure of Control

AI has supercharged the capabilities of surveillance states—turning what used to be slow, analog forms of monitoring into real-time, totalizing systems of behavioral analysis. In countries like China, AI is already integrated into mass surveillance infrastructures via:

  • Nationwide facial recognition networks
  • Predictive policing algorithms
  • Real-time emotion detection at borders or protests
  • Voiceprint databases and automated gait analysis
  • AI-driven loyalty scoring and digital ID integration

Suleyman argues that such systems do not merely monitor—they condition behavior. When surveillance is omnipresent, freedom becomes performative, and dissent becomes algorithmically discouraged before it can surface. What’s worse, these systems are often invisible to those outside the regime, creating closed feedback loops of repression and technical perfectionism.

He also emphasizes the stickiness of surveillance once deployed. Unlike software features that can be toggled on and off, surveillance infrastructure:

  • Is often deeply embedded in national security policy
  • Tied to vast databases of citizen behavior
  • Backed by private AI contractors with long-term service contracts

Dismantling such a system becomes politically and economically prohibitive—even if the regime shifts. That’s why Suleyman argues that early-stage containment is essential: once a population-scale AI surveillance system is active, it is functionally irreversible.

And while Western democracies may claim to operate under more ethical standards, similar tools are already in use:

  • AI video analytics by police departments
  • Predictive profiling in immigration and border control
  • Emotion detection tools in retail and workplace settings
  • Social media behavior scoring for marketing, credit, or hiring

These quieter versions of techno-surveillance still pose critical civil liberties questions, and Suleyman warns that mission creep—the gradual expansion of surveillance use cases—is already underway across the globe.

3. Autonomous Weapons: Delegating Lethality to Machines

Perhaps the most chilling frontier identified by Suleyman is the rise of autonomous weapons systems—AI-driven tools that can track, identify, and kill targets without human intervention.

The military-industrial adoption of AI is no longer theoretical. Around the world, leading powers are racing to deploy:

  • Swarming drone fleets with decentralized decision-making
  • AI-enabled missile systems that adjust in real-time to countermeasures
  • Robotic sentries capable of facial recognition and lethal engagement
  • Unmanned underwater and aerial vehicles for reconnaissance and offensive operations

The incentives are clear: faster reaction times, lower human risk, and increased strategic leverage. But Suleyman raises urgent alarms about the dehumanization of war and the lowering of thresholds for kinetic conflict.

When machines make kill decisions:

  • Accountability becomes ambiguous—who is responsible for an autonomous strike error?
  • Speed trumps diplomacy—decision-making windows shrink from hours to milliseconds
  • Escalation risks multiply—false positives or misidentifications could ignite regional conflicts

Unlike nuclear weapons, which are governed by treaties and mutual deterrence, there are currently no binding international agreements regulating the development or deployment of autonomous lethal systems. This regulatory vacuum, Suleyman argues, is one of the greatest policy failures of the AI era to date.

Furthermore, he emphasizes that these technologies will not remain exclusive to military superpowers. Commercial drones, open-source targeting software, and repurposed AI models could be used by insurgent groups or criminal networks—creating a new wave of asymmetric conflict that’s harder to trace and impossible to contain after the fact.

Together, these domains illustrate Suleyman’s central thesis: the risks of emerging AI are not abstract—they are operational, global, and accelerating. Without intentional, cross-border containment strategies, the spread of unchecked AI will not lead to utopia—it will lead to systemic instability across biological, civic, and military infrastructures.

The coming wave is not on the horizon. It is already breaking against our shores.

Suleyman’s Containment Framework: A Multi-Layered Approach

Mustafa Suleyman’s proposal for AI governance goes far beyond ethical pledges or reactive regulation. In The Coming Wave, he outlines a multi-tiered containment architecture—one designed not merely to mitigate harm after the fact, but to embed safety, accountability, and international cooperation into the technological lifecycle itself.

What makes his model distinct is its systems-thinking approach. Suleyman doesn’t believe a single actor—whether it’s a government, corporation, or NGO—can successfully manage the scale and velocity of AI proliferation. Instead, he proposes an ecosystem of controls spanning technical, legal, and diplomatic layers, working in concert across sectors and borders.

Below is a breakdown of the four key pillars of Suleyman’s containment model:

1. Model Auditing & Red-Teaming: Making Risk Visible

At the foundation of the containment strategy lies mandatory auditing for advanced AI systems. Suleyman proposes that all large-scale models—especially those intended for deployment in sensitive or high-impact contexts—must undergo rigorous, independent evaluation before release.

This includes a suite of safety protocols:

  • Bias audits to detect discriminatory outputs or skewed training data
  • Red-teaming exercises, where adversarial actors simulate attacks or attempts to manipulate the model
  • Dual-use risk assessments, examining whether a model could be easily repurposed for harmful tasks (e.g., generating disinformation, bioengineering malicious agents, or automating surveillance)

Importantly, Suleyman calls for these evaluations to be conducted by neutral third parties, not internal teams, to avoid conflicts of interest. In this vision, model testing becomes a public utility—not a corporate PR exercise.

He draws parallels to cybersecurity, where penetration testing and white-hat hacking are normalized—and argues AI must adopt a similar “trust but verify” culture before dangerous capabilities reach the public.

2. Licensing & Compliance Regimes: Regulating Compute Like a Commodity

Suleyman argues that governments must treat high-compute AI systems with the same regulatory gravity as aviation or finance. In both of those sectors, organizations are required to secure operational licenses, comply with safety standards, and accept routine inspections—and the same should apply to entities developing frontier models.

His proposed regime includes:

  • Licenses to train and deploy models above certain compute or capability thresholds (e.g., exceeding 10⁶⁰ FLOPs or containing >1T parameters)
  • Auditable documentation of training datasets, model intent, and alignment procedures
  • Penalties for noncompliance, including fines, shutdown orders, or bans from sensitive sectors
  • Transparency reporting, similar to financial disclosure forms, where labs report performance benchmarks, failure modes, and update cycles

By borrowing governance structures from domains like civil aviation (FAA) and financial regulation (SEC, FINRA), Suleyman envisions a world where AI developers are licensed professionals, not rogue engineers—and where violations can be penalized before they lead to public harm.

3. AI Safety Boards & Institutional Checks: Governance at the Speed of Code

To ensure real-time oversight, Suleyman proposes the creation of national and international AI safety boards. These bodies would serve as the functional equivalent of the FDA in healthcare or the IAEA in nuclear energy—empowered not only to monitor, but to intervene.

These boards would:

  • Certify high-risk models for deployment, based on standardized safety thresholds
  • Issue moratoriums or recalls if a model fails post-release safety benchmarks
  • Coordinate emergency response protocols for system failures, adversarial attacks, or runaway outputs
  • Convene multi-stakeholder panels including ethicists, engineers, sociologists, and legal scholars

These institutions would operate above the partisan fray, ideally governed by treaty or statute, and endowed with sufficient autonomy to act swiftly. In Suleyman’s framework, institutional checks are not bureaucratic delays—they’re circuit breakers, designed to slow down dangerous deployments in real time.

4. International Treaties on High-Risk AI: Global Governance for Existential Threats

Recognizing the geopolitical nature of AI proliferation, Suleyman places special emphasis on treaty-based containment of the most catastrophic risks. Just as nuclear powers were compelled to negotiate treaties to avoid mutual destruction, AI superpowers must align on red lines, transparency norms, and verification protocols.

Priority treaty domains include:

  • Autonomous weapons systems—banning or limiting fully autonomous lethal agents
  • Synthetic biology + AI convergence—requiring oversight of gene editing models that could produce biological threats
  • Superintelligence containment—establishing shared safety labs, escape detection measures, and joint fail-safe protocols
  • Model disclosure—requiring nations to report the training and deployment of models above specified thresholds

Suleyman advocates for a global AI Compact, with enforcement powers, public accountability, and shared standards for model evaluation. This compact would not only reduce arms-race dynamics but build the infrastructure for interoperability between jurisdictions—allowing ethical AI to flourish without regulatory fragmentation.

Suleyman’s containment framework is not a defense mechanism—it’s a governance architecture designed for resilience. It does not seek to halt technological progress, but to ensure that progress unfolds in a direction compatible with long-term human flourishing. As the stakes grow higher with each new frontier model, Suleyman’s call is clear: containment is not a luxury—it is an existential necessity.

The Innovation vs. Safety Debate

Mustafa Suleyman’s containment framework has been widely praised for its clarity, realism, and moral urgency—especially among policymakers, ethicists, and national security experts. But it has also sparked pushback from key players within the tech ecosystem, particularly from those who view regulation as a brake on innovation. The heart of the debate is this: can we govern AI without stifling the very creativity and experimentation that makes it valuable?

Startups, open-source advocates, and even some researchers raise several concerns about Suleyman’s model. They warn that over-regulating the frontier could unintentionally consolidate power, suppress beneficial innovation, and create new systemic risks of its own.

Key Critiques from Innovation Advocates

  1. Slowing Down Beneficial Use Cases
    Excessive regulation, especially around model licensing or international compliance, could delay life-saving innovations. For example:
    • AI-accelerated drug discovery and protein engineering may be held up by slow safety reviews
    • Climate models or precision agriculture systems might face export restrictions due to dual-use concerns
    • Educational or assistive AIs could be throttled by “one-size-fits-all” policy templates meant for more dangerous models
  2. Entrenching Incumbents
    Large tech firms—like Suleyman’s own employer, Microsoft—have the legal teams, compliance infrastructure, and lobbying power to navigate complex regulatory environments. Smaller AI startups may:
    • Be unable to afford mandatory audits or red-teaming
    • Struggle to interpret ambiguous safety thresholds
    • Get boxed out of public procurement pipelines due to bureaucratic hurdles
  3. Critics argue this could lead to market consolidation, where only the largest companies can afford to build safely—and thus control the future of AI by regulatory default.
  4. Incentivizing Regulatory Arbitrage
    If containment frameworks are not globally harmonized, companies may simply relocate development to jurisdictions with looser rules—just as some firms in the past have done in cryptocurrency, data hosting, or biotech.
    This could result in:
    • “Shadow labs” developing high-risk models outside public scrutiny
    • AI mercenary markets, where models are trained for sale in less regulated economies
    • A global race to the bottom, as nations compete to attract AI investment by weakening safeguards

These arguments don’t come from fringe corners—they are echoed by respected researchers, open-source leaders, and democratic technologists who want AI to remain transparent, collaborative, and equitable.

Suleyman’s Rebuttal: Regulation as Precondition, Not Opponent

Suleyman acknowledges these concerns—but offers a fundamentally different framing. In his view, unregulated proliferation is a far greater threat to innovation than responsible containment. Left unchecked, runaway AI systems could trigger:

  • Public backlash and blanket bans after high-profile failures
  • Litigation waves that chill commercial deployment
  • Geopolitical escalations, prompting hardline global moratoriums
  • Loss of public trust, undermining adoption across critical sectors

His position is that a fragile innovation ecosystem is not a free one—it’s a vulnerable one. If people don’t trust AI, or if governments are forced to react to AI-induced crises, the end result will be worse for innovators than smart, anticipatory regulation.

Rather than arguing for rigidity, Suleyman calls for adaptive regulation—governance that evolves with model capabilities, but still holds firm to non-negotiable safety thresholds. He advocates for:

  • Risk-tiered governance: Different rules for low-risk versus high-risk models
  • Regulatory sandboxes: Safe testing zones for startups to experiment within controlled environments
  • Auditing subsidies: Public funding or shared services to help smaller firms meet compliance
  • Open standards bodies: Community-led efforts to define ethical AI baselines across jurisdictions

He sees containment not as a blunt instrument, but as a feedback loop—a mechanism for co-evolving policy and technology in tandem. In this framing, regulation is not the enemy of innovation—it is the infrastructure that makes safe, scalable innovation possible.

The debate between innovation and restraint isn’t new—but in AI, it has higher stakes than ever before. The challenge isn’t choosing between progress and safety. It’s building the institutional maturity to do both at once.

As Suleyman writes, “Our goal must be to ensure that the benefits of the coming wave are distributed widely—without letting its most dangerous elements tear society apart.”

Final Call: Containment as Collaboration

At its heart, Suleyman’s thesis is a call to action for multi-sector collaboration. Containment cannot be outsourced to any one entity. It requires:

  • Technologists to build safeguards into the models
  • Governments to craft enforceable rules and protocols
  • Civil society to demand transparency, equity, and accountability
  • International coalitions to align enforcement and share intelligence

Suleyman offers a vision where containment is not censorship, but stewardship—a civic infrastructure to ensure AI enhances the human future rather than endangers it.

In The Coming Wave, Suleyman delivers more than a warning. He delivers a blueprint. The question now is not whether we can contain the wave—but whether we will choose to.

Works Cited

Bhaskar, M., & Suleyman, M. (2023). The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma. Crown Publishing Group.  

Gates, B. (2024, December 3). My favorite book on AI. GatesNotes.  

Inflection.ai. (2023, September 8). Inflection.ai CEO Mustafa Suleyman explains how to catch a ride on the ‘coming wave’ of technology. The Associated Press.  

Lichfield, G., & Goode, L. (2023, August 16). The world isn’t ready for the next decade of AI. Wired.  

Wikipedia contributors. (2025, June). Mustafa Suleyman. Wikipedia

Wikipedia contributors. (2025, June). Michael Bhaskar. Wikipedia

Klover.ai. (n.d.). Mustafa Suleyman’s role in government AI strategy. Klover.ai. https://www.klover.ai/mustafa-suleymans-role-in-government-ai-strategy/

Klover.ai. (n.d.). Mustafa Suleyman’s influence on applied AI ethics. Klover.ai. https://www.klover.ai/mustafa-suleymans-influence-on-applied-ai-ethics/

Klover.ai. (n.d.). Mustafa Suleyman. Klover.ai. https://www.klover.ai/mustafa-suleyman/

Subscribe To Our Newsletter

Get updates and learn from the best

More To Explore

Ready to start making better decisions?

drop us a line and find out how

Klover.ai delivers enterprise-grade decision intelligence through AGD™—a human-centric, multi-agent AI system designed to power smarter, faster, and more ethical decision-making.

Contact Us

Follow our newsletter

    Decision Intelligence
    AGD™
    AI Decision Making
    Enterprise AI
    Augmented Human Decisions
    AGD™ vs. AGI

    © 2025 Klover.ai All Rights Reserved.

    Cart (0 items)

    Create your account