Artwork for Into AI Safety

Into AI Safety

Technology · Podcasts · Science · Mathematics · EN-US · United States
Rating unavailable
The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved with the conversations surrounding the rules and regulations that should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence" or "AI". For better-formatted show notes, additional resources, and more, go to https://kairos.fm/intoaisafety/
Top 48.5% by pitch volume (Rank #24259 of 50,000). Data updated Feb 10, 2026.

Key Facts

Publishes
N/A
Episodes
27
Founded
N/A
Category
Technology
Number of listeners
Private
Hidden on public pages

Listen to this Podcast

Pitch this podcast
Get the guest pitch kit.
Book a quick demo to unlock the outreach details you actually need before you hit send.
  • Verified contact + outreach fields
  • Exact listener estimates (not just bands)
  • Reply rate + response timing signals
10 minutes. Friendly walkthrough. No pressure.
Book a demo
Public snapshot
Audience: Under 4K / month
Canonical: https://podpitch.com/podcasts/into-ai-safety
Reply rate: 35%+

Latest Episodes

Scaling AI Safety Through Mentorship w/ Dr. Ryan Kidd

Mon Feb 02 2026


What does it actually take to build a successful AI safety organization? I'm joined by Dr. Ryan Kidd, who has co-led MATS from a small pilot program to one of the field's premier talent pipelines. In this episode, he reveals the low-hanging fruit in AI safety field-building that most people are missing: the amplifier archetype. I pushed Ryan on some hard questions, from balancing funder priorities and research independence to building a robust selection process for both mentors and participants. Whether you're considering a career pivot into AI safety or already working in the field, this conversation offers practical advice on how to actually make an impact.

Chapters
  • (00:00) Intro
  • (08:16) Building MATS Post-FTX & Summer of Love
  • (13:09) Balancing Funder Priorities and Research Independence
  • (19:44) The MATS Selection Process
  • (33:15) Talent Archetypes in AI Safety
  • (50:22) Comparative Advantage and Career Capital in AI Safety
  • (01:04:35) Building the AI Safety Ecosystem
  • (01:15:28) What Makes a Great AI Safety Amplifier
  • (01:21:44) Lightning Round Questions
  • (01:30:30) Final Thoughts & Outro

Links
  • MATS

Ryan's Writing
  • LessWrong post - Talent needs of technical AI safety teams
  • LessWrong post - AI safety undervalues founders
  • LessWrong comment - Comment permalink with 2025 MATS program details
  • LessWrong post - Talk: AI Safety Fieldbuilding at MATS
  • LessWrong post - MATS Mentor Selection
  • LessWrong post - Why I funded PIBBSS
  • EA Forum post - How MATS addresses mass movement building concerns

FTX Funding of AI Safety
  • LessWrong blogpost - An Overview of the AI Safety Funding Situation
  • Fortune article - Why Sam Bankman-Fried's FTX debacle is roiling A.I. research
  • NY Times article - FTX probes $6.5M in payments to AI safety group amid clawback crusade
  • Cointelegraph article - FTX probes $6.5M in payments to AI safety group amid clawback crusade
  • FTX Future Fund article - Future Fund June 2022 Update (archive)
  • Tracxn page - Anthropic Funding and Investors

Training & Support Programs
  • Catalyze Impact
  • Seldon Lab
  • SPAR
  • BlueDot Impact
  • YCombinator
  • Pivotal
  • Athena
  • Astra Fellowship
  • Horizon Fellowship
  • BASE Fellowship
  • LASR Labs
  • Entrepreneur First

Funding Organizations
  • Coefficient Giving (previously Open Philanthropy)
  • LTFF
  • Longview Philanthropy
  • Renaissance Philanthropy

Coworking Spaces
  • LISA
  • Mox
  • Lighthaven
  • FAR Labs
  • Constellation
  • Collider
  • NET Office
  • BAISH

Research Organizations & Startups
  • Atla AI
  • Apollo Research
  • Timaeus
  • RAND CAST
  • CHAI

Other Sources
  • AXRP website - The AI X-risk Research Podcast
  • LessWrong blogpost - Shard Theory: An Overview

Key Metrics

Pitches sent
14
From PodPitch users
Rank
#24259
Top 48.5% by pitch volume (Rank #24259 of 50,000)
Average rating
N/A
Ratings count may be unavailable
Reviews
N/A
Written reviews (when available)
Publish cadence
N/A
Episode count
27
Data updated
Feb 10, 2026
Social followers
N/A

Public Snapshot

Country
United States
Language
EN-US
Language (ISO)
Release cadence
N/A
Latest episode date
Mon Feb 02 2026

Audience & Outreach (Public)

Audience range
Under 4K / month
Public band
Reply rate band
35%+
Public band
Response time band
3–6 days
Public band
Replies received
1–5
Public band

Public ranges are rounded for privacy. Unlock the full report for exact values.

Presence & Signals

Social followers
N/A
Contact available
Yes
Masked on public pages
Sponsors detected
Private
Hidden on public pages
Guest format
Private
Hidden on public pages

Social links

No public profiles listed.

Demo to Unlock Full Outreach Intelligence

We publicly share enough context for discovery. For actionable outreach data, unlock the private blocks below.

Audience & Growth
Demo to unlock
Monthly listeners: 49,360
Reply rate: 18.2%
Avg response: 4.1 days
See audience size and growth. Demo to unlock.
Contact preview
l***@hidden
Get verified host contact details. Demo to unlock.
Sponsor signals
Demo to unlock
Sponsor mentions: Likely
Ad-read history: Available
View sponsorship signals and ad read history. Demo to unlock.
Book a demo

How To Pitch Into AI Safety

Want to get booked on podcasts like this?

Become the guest your future customers already trust.

PodPitch helps you find shows, draft personalized pitches, and hit send faster. We share enough public context for discovery; for actionable outreach data, unlock the private blocks.

  • Identify shows that match your audience and offer.
  • Write pitches in your voice (nothing sends without you).
  • Move from “maybe later” to booked interviews faster.
  • Unlock deeper outreach intelligence with a quick demo.

This show is Rank #24259 by pitch volume, with 14 pitches sent by PodPitch users.

Book a demo · Browse more shows. 10 minutes. Friendly walkthrough. No pressure.
Rating unavailable
Ratings: N/A
Written reviews: N/A

We summarize public review counts here; full review text aggregation is not shown on PodPitch yet.

Frequently Asked Questions About Into AI Safety

What is Into AI Safety about?

The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved with the conversations surrounding the rules and regulations that should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence" or "AI". For better-formatted show notes, additional resources, and more, go to https://kairos.fm/intoaisafety/

How often does Into AI Safety publish new episodes?

Into AI Safety publishes on a variable schedule.

How many listeners does Into AI Safety get?

PodPitch shows a public audience band (like "Under 4K / month"). Book a demo to unlock exact audience estimates and how we calculate them.

How can I pitch Into AI Safety?

Use PodPitch to access verified outreach details and pitch recommendations for Into AI Safety. Start at https://podpitch.com/try/1.

Which podcasts are similar to Into AI Safety?

This page includes internal links to similar podcasts. You can also browse the full directory at https://podpitch.com/podcasts.

How do I contact Into AI Safety?

Public pages only show a masked contact preview. Book a demo to unlock verified email and outreach fields.

Quick favor for your future self: want podcast bookings without the extra mental load? PodPitch helps you find shows, draft personalized pitches, and hit send faster.