Can't We Just Pause AI? | For Humanity #78
Sat Jan 31 2026
What happens when AI risk stops being theoretical and starts showing up in people's jobs, families, and communities?

In this episode of For Humanity, John sits down with Maxime Fournes, the new CEO of PauseAI Global, for a wide-ranging and deeply human conversation about burnout, strategy, and what it will actually take to slow down runaway AI development. From meditation retreats and personal sustainability to mass job displacement, data center backlash, and political capture, Maxime lays out a clear-eyed view of where the AI safety movement stands in 2026, and where it must go next.

They explore why regulation alone won't save us, how near-term harms like job loss, youth mental health crises, and community disruption may be the most powerful on-ramps to existential risk awareness, and why movement-building, not just policy papers, will decide our future. This conversation reframes AI safety as a struggle over power, narratives, and timing, and asks what it would take to hit a true global tipping point before it's too late.
Together, they explore:
* Why AI safety must address real, present-day harms, not just abstract futures
* How burnout and mental resilience shape long-term movement success
* Why job displacement, youth harm, and data centers are political leverage points
* The limits of regulation without enforcement and public pressure
* How tipping points in public opinion actually form
* Why protests still matter, even when they're small
* What it will take to build a global, durable AI safety movement
📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.
Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe