Scaling AI Safety Through Mentorship w/ Dr. Ryan Kidd
Mon Feb 02 2026
What does it actually take to build a successful AI safety organization? I'm joined by Dr. Ryan Kidd, who has co-led MATS as it grew from a small pilot program into one of the field's premier talent pipelines. In this episode, he reveals the low-hanging fruit in AI safety field-building that most people are missing: the amplifier archetype.
I pushed Ryan on some hard questions, from balancing funder priorities with research independence to building a robust selection process for both mentors and participants. Whether you're considering a career pivot into AI safety or already working in the field, this conversation offers practical advice on how to actually make an impact.
Chapters
(00:00) - Intro
(08:16) - Building MATS Post-FTX & Summer of Love
(13:09) - Balancing Funder Priorities and Research Independence
(19:44) - The MATS Selection Process
(33:15) - Talent Archetypes in AI Safety
(50:22) - Comparative Advantage and Career Capital in AI Safety
(01:04:35) - Building the AI Safety Ecosystem
(01:15:28) - What Makes a Great AI Safety Amplifier
(01:21:44) - Lightning Round Questions
(01:30:30) - Final Thoughts & Outro
Links

MATS

Ryan's Writing
LessWrong post - Talent needs of technical AI safety teams
LessWrong post - AI safety undervalues founders
LessWrong comment - Comment permalink with 2025 MATS program details
LessWrong post - Talk: AI Safety Fieldbuilding at MATS
LessWrong post - MATS Mentor Selection
LessWrong post - Why I funded PIBBSS
EA Forum post - How MATS addresses mass movement building concerns

FTX Funding of AI Safety
LessWrong blogpost - An Overview of the AI Safety Funding Situation
Fortune article - Why Sam Bankman-Fried’s FTX debacle is roiling A.I. research
NY Times article - FTX probes $6.5M in payments to AI safety group amid clawback crusade
Cointelegraph article - FTX probes $6.5M in payments to AI safety group amid clawback crusade
FTX Future Fund article - Future Fund June 2022 Update (archive)
Tracxn page - Anthropic Funding and Investors

Training & Support Programs
Catalyze Impact
Seldon Lab
SPAR
BlueDot Impact
Y Combinator
Pivotal
Athena
Astra Fellowship
Horizon Fellowship
BASE Fellowship
LASR Labs
Entrepreneur First

Funding Organizations
Coefficient Giving (previously Open Philanthropy)
LTFF
Longview Philanthropy
Renaissance Philanthropy

Coworking Spaces
LISA
Mox
Lighthaven
FAR Labs
Constellation
Collider
NET Office
BAISH

Research Organizations & Startups
Atla AI
Apollo Research
Timaeus
RAND CAST
CHAI

Other Sources
AXRP website - The AI X-risk Research Podcast
LessWrong blogpost - Shard Theory: An Overview