
Machine Learning Street Talk (MLST)

Technology · Podcasts · EN · United States · Bi-weekly
4.6 / 5
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from MIT Doctor of Philosophy Keith Duggar (https://www.linkedin.com/in/dr-keith-duggar/).
Top 4.8% by pitch volume (Rank #2388 of 50,000). Data updated Feb 10, 2026.

Key Facts

Publishes
Bi-weekly
Episodes
246
Founded
N/A
Category
Technology
Number of listeners
Private
Hidden on public pages


Pitch this podcast
Get the guest pitch kit.
Book a quick demo to unlock the outreach details you actually need before you hit send.
  • Verified contact + outreach fields
  • Exact listener estimates (not just bands)
  • Reply rate + response timing signals
10 minutes. Friendly walkthrough. No pressure.
Book a demo
Public snapshot
Audience: 200K–400K / month
Canonical: https://podpitch.com/podcasts/machine-learning-street-talk-mlst
Cadence: Active monthly
Reply rate: Under 2%

Latest Episodes


VAEs Are Energy-Based Models? [Dr. Jeff Beck]

Sun Jan 25 2026


What makes something truly *intelligent?* Is a rock an agent? Could a perfect simulation of your brain actually *be* you? In this fascinating conversation, Dr. Jeff Beck takes us on a journey through the philosophical and technical foundations of agency, intelligence, and the future of AI.

Jeff doesn't hold back on the big questions. He argues that from a purely mathematical perspective, there's no structural difference between an agent and a rock – both execute policies that map inputs to outputs. The real distinction lies in *sophistication* – how complex are the internal computations? Does the system engage in planning and counterfactual reasoning, or is it just a lookup table that happens to give the right answers?

Key topics explored in this conversation:

  • *The Black Box Problem of Agency* – How can we tell if something is truly planning versus just executing a pre-computed response? Jeff explains why this question is nearly impossible to answer from the outside, and why the best we can do is ask which model gives us the simplest explanation.
  • *Energy-Based Models Explained* – A masterclass on how EBMs differ from standard neural networks. The key insight: traditional networks only optimize weights, while energy-based models optimize *both* weights and internal states – a subtle but profound distinction that connects to Bayesian inference (see the sketch after the episode notes below).
  • *Why Your Brain Might Have Evolved from Your Nose* – One of the most surprising moments in the conversation. Jeff proposes that the complex, non-smooth nature of olfactory space may have driven the evolution of our associative cortex and planning abilities.
  • *The JEPA Revolution* – A deep dive into Yann LeCun's Joint Embedding Prediction Architecture and why learning in latent space (rather than predicting every pixel) might be the key to more robust AI representations.
  • *AI Safety Without Skynet Fears* – Jeff takes a refreshingly grounded stance on AI risk. He's less worried about rogue superintelligences and more concerned about humans becoming "reward function selectors" – couch potatoes who just approve or reject AI outputs. His proposed solution? Use inverse reinforcement learning to derive AI goals from observed human behavior, then make *small* perturbations rather than naive commands like "end world hunger."

Whether you're interested in the philosophy of mind, the technical details of modern machine learning, or just want to understand what makes intelligence *tick,* this conversation delivers insights you won't find anywhere else.

TIMESTAMPS:
00:00:00 Geometric Deep Learning & Physical Symmetries
00:00:56 Defining Agency: From Rocks to Planning
00:05:25 The Black Box Problem & Counterfactuals
00:08:45 Simulated Agency vs. Physical Reality
00:12:55 Energy-Based Models & Test-Time Training
00:17:30 Bayesian Inference & Free Energy
00:20:07 JEPA, Latent Space, & Non-Contrastive Learning
00:27:07 Evolution of Intelligence & Modular Brains
00:34:00 Scientific Discovery & Automated Experimentation
00:38:04 AI Safety, Enfeeblement & The Future of Work

REFERENCES:
Concepts:
[00:00:58] Free Energy Principle (FEP) – https://en.wikipedia.org/wiki/Free_energy_principle
[00:06:00] Monte Carlo Tree Search – https://en.wikipedia.org/wiki/Monte_Carlo_tree_search
Book:
[00:09:00] The Intentional Stance – https://mitpress.mit.edu/9780262540537/the-intentional-stance/
Papers:
[00:13:00] A Tutorial on Energy-Based Learning (LeCun 2006) – http://yann.lecun.com/exdb/publis/pdf/lecun-06.pdf
[00:15:00] Auto-Encoding Variational Bayes (VAE) – https://arxiv.org/abs/1312.6114
[00:20:15] JEPA (Joint Embedding Prediction Architecture) – https://openreview.net/forum?id=BZ5a1r-kVsf
[00:22:30] The Wake-Sleep Algorithm – https://www.cs.toronto.edu/~hinton/absps/ws.pdf

RESCRIPT: https://app.rescript.info/public/share/DJlSbJ_Qx080q315tWaqMWn3PixCQsOcM4Kf1IW9_Eo
PDF: https://app.rescript.info/api/public/sessions/0efec296b9b6e905/pdf
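The EBM point in the episode notes is easy to make concrete. Below is a minimal sketch of energy-based (iterative) inference, assuming PyTorch; the toy decoder, dimensions, and step counts are illustrative assumptions, not code from the episode. Instead of a feedforward encoder amortizing inference as in a standard VAE, the latent state z is itself optimized against an energy (reconstruction error plus prior) for each new observation, which is the "optimize internal states as well as weights" idea in miniature.

import torch

torch.manual_seed(0)

# Toy generative map (decoder): 2-dim latent z -> 8-dim observation x.
decoder = torch.nn.Linear(2, 8)
for p in decoder.parameters():
    p.requires_grad_(False)  # weights held fixed here; only the state z is inferred
x = torch.randn(8)  # a single "observed" data point

def energy(z):
    # Reconstruction error plus a standard-normal prior on z.
    # Lower energy means z explains x better (cf. the negative ELBO up to constants).
    return 0.5 * ((decoder(z) - x) ** 2).sum() + 0.5 * (z ** 2).sum()

# Iterative inference: descend the energy in z at test time for this x.
z = torch.zeros(2, requires_grad=True)
opt = torch.optim.SGD([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    energy(z).backward()
    opt.step()

print(f"final energy {energy(z).item():.3f}, inferred z {z.detach().tolist()}")

On this reading, a standard VAE amortizes the inner loop above: the encoder is trained to output (a distribution over) the z that this descent would find, which is one way to cash out the episode title's suggestion that VAEs can be viewed as energy-based models.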


Key Metrics

Pitches sent
67
From PodPitch users
Rank
#2388
Top 4.8% by pitch volume (Rank #2388 of 50,000)
Average rating
4.6
Ratings count may be unavailable
Reviews
12
Written reviews (when available)
Publish cadence
Bi-weekly
Active monthly
Episode count
246
Data updated
Feb 10, 2026
Social followers
237.4K

Public Snapshot

Country
United States
Language
English
Language (ISO)
en
Release cadence
Bi-weekly
Latest episode date
Sun Jan 25 2026

Audience & Outreach (Public)

Audience range
200K–400K / month
Public band
Reply rate band
Under 2%
Public band
Response time band
30+ days
Public band
Replies received
1–5
Public band

Public ranges are rounded for privacy. Unlock the full report for exact values.

Presence & Signals

Social followers
237.4K
Contact available
Yes
Masked on public pages
Sponsors detected
Private
Hidden on public pages
Guest format
Private
Hidden on public pages

Social links

No public profiles listed.

Demo to Unlock Full Outreach Intelligence

We publicly share enough context for discovery. For actionable outreach data, unlock the private blocks below.

Audience & Growth
Demo to unlock
Monthly listeners
49,360
Reply rate
18.2%
Avg response
4.1 days
See audience size and growth. Demo to unlock.
Contact preview
t***@hidden
Get verified host contact details. Demo to unlock.
Sponsor signals
Demo to unlock
Sponsor mentions
Likely
Ad-read history
Available
View sponsorship signals and ad read history. Demo to unlock.
Book a demo

How To Pitch Machine Learning Street Talk (MLST)


Want to get booked on podcasts like this?

Become the guest your future customers already trust.

PodPitch helps you find shows, draft personalized pitches, and hit send faster. We share enough public context for discovery; for actionable outreach data, unlock the private blocks.

  • Identify shows that match your audience and offer.
  • Write pitches in your voice (nothing sends without you).
  • Move from “maybe later” to booked interviews faster.
  • Unlock deeper outreach intelligence with a quick demo.

This show is Rank #2388 by pitch volume, with 67 pitches sent by PodPitch users.

Book a demo
Browse more shows
10 minutes. Friendly walkthrough. No pressure.
4.6 / 5
Ratings
N/A
Written reviews
12

We summarize public review counts here; full review text aggregation is not shown on PodPitch yet.

Frequently Asked Questions About Machine Learning Street Talk (MLST)


What is Machine Learning Street Talk (MLST) about?

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from MIT Doctor of Philosophy Keith Duggar (https://www.linkedin.com/in/dr-keith-duggar/).

How often does Machine Learning Street Talk (MLST) publish new episodes?

New episodes are published bi-weekly.

How many listeners does Machine Learning Street Talk (MLST) get?

PodPitch shows a public audience band (like "200K–400K / month"). Book a demo to unlock exact audience estimates and how we calculate them.

How can I pitch Machine Learning Street Talk (MLST)?

Use PodPitch to access verified outreach details and pitch recommendations for Machine Learning Street Talk (MLST). Start at https://podpitch.com/try/1.

Which podcasts are similar to Machine Learning Street Talk (MLST)?

This page includes internal links to similar podcasts. You can also browse the full directory at https://podpitch.com/podcasts.

How do I contact Machine Learning Street Talk (MLST)?

Public pages only show a masked contact preview. Book a demo to unlock verified email and outreach fields.

Quick favor for your future self: want podcast bookings without the extra mental load? PodPitch helps you find shows, draft personalized pitches, and hit send faster.