Anthropic’s “Safe AI” Narrative Is Falling Apart | Warning Shots #28
Sun Feb 01 2026
What happens when the people building the most powerful AI systems in the world admit the risks, then keep accelerating anyway?

In this episode of Warning Shots, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to break down Dario Amodei’s latest essay, The Adolescent Phase of AI, and why its calm, reassuring tone may be far more dangerous than open alarmism. They unpack how “safe AI” narratives can dull public urgency even as capabilities race ahead and control remains elusive.

The conversation expands to the Doomsday Clock moving closer to midnight, with AI now explicitly named as an extinction-amplifying risk, and the unsettling news that AI systems like Grok are beginning to outperform humans at predicting real-world outcomes. From intelligence explosion dynamics and bioweapons risk to unemployment, prediction markets, and the myth of “surgical” AI safety, this episode asks a hard question: what does responsibility even mean when no one is truly in control?

This is a blunt, unsparing conversation about power, incentives, and why the absence of “adults in the room” may be the defining danger of the AI era.
🔎 They explore:
* Why “responsible acceleration” may be incoherent
* How AI amplifies nuclear, biological, and geopolitical risk
* Why prediction superiority is a critical AGI warning sign
* The psychological danger of trusted elites projecting confidence
* Why AI safety narratives can suppress public urgency
* What it means to build systems no one can truly stop
As the people building AI admit the risks and keep going anyway, this episode asks the question no one wants to answer: what does “responsibility” mean when there’s no stop button?
If it’s Sunday, it’s Warning Shots.
📺 Watch more on The AI Risk Network
🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence
🗨️ Join the Conversation
Do calm, reassuring AI narratives reduce public panic—or dangerously delay action? Let us know what you think in the comments.
Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe