
For Humanity: An AI Safety Podcast

Technology · Podcasts · Society & Culture · EN · United States · Daily or near-daily
4.4 / 5
For Humanity, An AI Risk Podcast is the AI Risk Podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity. More at theairisknetwork.substack.com/s/for-humanity-an-ai-risk-podcast.
Top 21.8% by pitch volume (Rank #10878 of 50,000). Data updated Feb 10, 2026.

Key Facts

Publishes: Daily or near-daily
Episodes: 117
Founded: N/A
Category: Technology
Number of listeners: Private (hidden on public pages)


Pitch this podcast
Get the guest pitch kit.
Book a quick demo to unlock the outreach details you actually need before you hit send.
  • Verified contact + outreach fields
  • Exact listener estimates (not just bands)
  • Reply rate + response timing signals
10 minutes. Friendly walkthrough. No pressure.
Book a demo
Public snapshot
Audience: Under 4K / month
Canonical: https://podpitch.com/podcasts/for-humanity-an-ai-safety-podcast
Cadence: Active weekly
Reply rate: Under 2%

Latest Episodes


Can't We Just Pause AI? | For Humanity #78

Sat Jan 31 2026


What happens when AI risk stops being theoretical and starts showing up in people’s jobs, families, and communities? In this episode of For Humanity, John sits down with Maxime Fournes, the new CEO of PauseAI Global, for a wide-ranging and deeply human conversation about burnout, strategy, and what it will actually take to slow down runaway AI development. From meditation retreats and personal sustainability to mass job displacement, data center backlash, and political capture, Maxime lays out a clear-eyed view of where the AI safety movement stands in 2026, and where it must go next.

They explore why regulation alone won’t save us, how near-term harms like job loss, youth mental health crises, and community disruption may be the most powerful on-ramps to existential risk awareness, and why movement-building, not just policy papers, will decide our future. This conversation reframes AI safety as a struggle over power, narratives, and timing, and asks what it would take to hit a true global tipping point before it’s too late.

Together, they explore:
  • Why AI safety must address real, present-day harms, not just abstract futures
  • How burnout and mental resilience shape long-term movement success
  • Why job displacement, youth harm, and data centers are political leverage points
  • The limits of regulation without enforcement and public pressure
  • How tipping points in public opinion actually form
  • Why protests still matter, even when they’re small
  • What it will take to build a global, durable AI safety movement

📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat. Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe


Key Metrics

Back to top
Pitches sent: 29 (from PodPitch users)
Rank: #10878 (top 21.8% by pitch volume of 50,000 shows)
Average rating: 4.4 (ratings count may be unavailable)
Reviews: 4 written reviews (when available)
Publish cadence: Daily or near-daily (active weekly)
Episode count: 117
Data updated: Feb 10, 2026
Social followers: 642

Public Snapshot

Country: United States
Language: English
Language (ISO): EN
Release cadence: Daily or near-daily
Latest episode date: Jan 31, 2026

Audience & Outreach (Public)

Audience range: Under 4K / month (public band)
Reply rate band: Under 2% (public band)
Response time band: 3–6 days (public band)
Replies received: 1–5 (public band)

Public ranges are rounded for privacy. Unlock the full report for exact values.

Presence & Signals

Social followers: 642
Contact available: Yes (masked on public pages)
Sponsors detected: Private (hidden on public pages)
Guest format: Private (hidden on public pages)

Social links

No public profiles listed.

Demo to Unlock Full Outreach Intelligence

We publicly share enough context for discovery. For actionable outreach data, unlock the private blocks below.

Audience & Growth
Demo to unlock
Monthly listeners: 49,360
Reply rate: 18.2%
Avg response: 4.1 days
See audience size and growth. Demo to unlock.
Contact preview
a***@hidden
Get verified host contact details. Demo to unlock.
Sponsor signals
Demo to unlock
Sponsor mentions: Likely
Ad-read history: Available
View sponsorship signals and ad read history. Demo to unlock.
Book a demo

How To Pitch For Humanity: An AI Safety Podcast


Want to get booked on podcasts like this?

Become the guest your future customers already trust.

PodPitch helps you find shows, draft personalized pitches, and hit send faster. We share enough public context for discovery; for actionable outreach data, unlock the private blocks.

  • Identify shows that match your audience and offer.
  • Write pitches in your voice (nothing sends without you).
  • Move from “maybe later” to booked interviews faster.
  • Unlock deeper outreach intelligence with a quick demo.

This show is Rank #10878 by pitch volume, with 29 pitches sent by PodPitch users.

Book a demo · Browse more shows
10 minutes. Friendly walkthrough. No pressure.
4.4 / 5
Ratings: N/A
Written reviews: 4

We summarize public review counts here; full review text aggregation is not shown on PodPitch yet.

Frequently Asked Questions About For Humanity: An AI Safety Podcast


What is For Humanity: An AI Safety Podcast about?

For Humanity, An AI Risk Podcast is the AI Risk Podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity. More at theairisknetwork.substack.com/s/for-humanity-an-ai-risk-podcast.

How often does For Humanity: An AI Safety Podcast publish new episodes?

The show publishes new episodes daily or near-daily.

How many listeners does For Humanity: An AI Safety Podcast get?

PodPitch shows a public audience band (like "Under 4K / month"). Book a demo to unlock exact audience estimates and how we calculate them.

How can I pitch For Humanity: An AI Safety Podcast?

Use PodPitch to access verified outreach details and pitch recommendations for For Humanity: An AI Safety Podcast. Start at https://podpitch.com/try/1.

Which podcasts are similar to For Humanity: An AI Safety Podcast?

This page includes internal links to similar podcasts. You can also browse the full directory at https://podpitch.com/podcasts.

How do I contact For Humanity: An AI Safety Podcast?

Public pages only show a masked contact preview. Book a demo to unlock verified email and outreach fields.

Quick favor for your future self: want podcast bookings without the extra mental load? PodPitch helps you find shows, draft personalized pitches, and hit send faster.