
The Machine Learning Podcast

Technology · Education · EN · United States · Daily or near-daily
4.4 / 5
This show is your guidebook to building scalable and maintainable AI systems. You will learn how to architect AI applications, apply AI to your work, and the considerations involved in building or customizing new models. Everything that you need to know to deliver real impact and value with machine learning and artificial intelligence.
Top 4.2% by pitch volume (Rank #2092 of 50,000). Data updated Feb 10, 2026.

Key Facts

Publishes
Daily or near-daily
Episodes
76
Founded
N/A
Category
Technology
Number of listeners
Private
Hidden on public pages


Pitch this podcast
Get the guest pitch kit.
Book a quick demo to unlock the outreach details you actually need before you hit send.
  • Verified contact + outreach fields
  • Exact listener estimates (not just bands)
  • Reply rate + response timing signals
10 minutes. Friendly walkthrough. No pressure.
Book a demo
Public snapshot
Audience: 8K–20K / month
Canonical: https://podpitch.com/podcasts/the-machine-learning-podcast
Cadence: Active monthly
Reply rate: 35%+

Latest Episodes


GPU Clouds, Aggregators, and the New Economics of AI Compute

Tue Jan 27 2026


Summary

In this episode I sit down with Hugo Shi, co-founder and CTO of Saturn Cloud, to map the strategic realities of sourcing and operating GPUs across clouds. Hugo breaks down today’s provider landscape—from hyperscalers to full-service GPU clouds, bare metal/concierge providers, and emerging GPU aggregators—and how to choose among them based on security posture, managed services, and cost. We explore practical layers of capability (compute, orchestration with Kubernetes/Slurm, storage, networking, and managed services), the trade-offs of portability on “Kubernetes-native” stacks, and the persistent challenge of data gravity. We also discuss current supply dynamics, the growing availability of on-demand capacity as newer chips roll out, and how AMD’s ecosystem is maturing as real competition to NVIDIA. Hugo shares patterns for separating training and inference across providers, why traditional ML is far from dead, and how usage varies wildly across domains like biotech. We close with predictions on consolidation, full-stack experiences from GPU clouds, financial-style GPU marketplaces, and much-needed advances in reliability for long-running GPU jobs.

Announcements

  • Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems.
  • Unlock the full potential of your AI workloads with a seamless and composable data infrastructure. Bruin is an open source framework that streamlines integration from the command line, allowing you to focus on what matters most: building intelligent systems. Write Python code for your business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement. With native support for ML/AI workloads, Bruin empowers data teams to deliver faster, more reliable, and scalable AI solutions. Harness Bruin's connectors for hundreds of platforms, including popular machine learning frameworks like TensorFlow and PyTorch. Build end-to-end AI workflows that integrate seamlessly with your existing tech stack. Join the ranks of forward-thinking organizations that are revolutionizing their data engineering with Bruin. Get started today at aiengineeringpodcast.com/bruin, and for dbt Cloud customers, enjoy a $1,000 credit to migrate to Bruin Cloud.
  • Your host is Tobias Macey and today I'm interviewing Hugo Shi about the strategic realities of sourcing GPUs in the cloud for your training and inference workloads.

Interview

  • Introduction
  • How did you get involved in machine learning?
  • Can you start by giving a summary of your understanding of the current market for "cloud" GPUs?
  • How would you characterize the customer base for the "neocloud" providers?
  • How is access to GPU compute typically mediated?
  • The predominant cloud providers (AWS, GCP, Azure) have gained market share by offering numerous differentiated services and ease-of-use features. What are the types of services that you might expect from a GPU provider?
  • The "cloud-native" ecosystem was developed with the promise of enabling workload portability, but the realities are often more complicated. What are some of the difficulties that teams encounter when trying to adapt their workloads to these different cloud providers?
  • What are the toolchains/frameworks/architectures that you are seeing as most effective at adapting to these different compute environments?
  • One of the major themes in the 2010s that worked against multi-cloud strategies was the idea of "data gravity". What are the strategies that teams are using to mitigate that tax on their workloads?
  • Data gravity has a more substantial impact on training workloads than on inference compute. How are you seeing teams think about the balance of cost savings vs. operational complexity for those different workloads?
  • What are the most interesting, innovative, or unexpected ways that you have seen teams capitalize on GPU capacity across these new providers?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on enabling teams to execute workloads on these neoclouds?
  • When is a "neocloud" or "GPU cloud" provider the wrong choice?
  • What are your predictions for the future evolution of GPU-as-a-service as hardware availability improves and model architectures become more efficient?

Contact Info

  • LinkedIn

Parting Question

  • From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

Closing Announcements

  • Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

Saturn Cloud, Pandas, NumPy, MatLab, AWS, GCP, Azure, Oracle Cloud, RunPod, FluidStack, SFCompute, KubeFlow, Lightning AI, DStack, Metaflow, Flyte, Arya AI, Dagster, Coreweave, Vultr, Nebius, Vast.ai, Weka, Vast Data, Slurm, CNCF (Cloud-Native Computing Foundation), Kubernetes, Terraform, ECS, Helm Chart, Block Storage, Object Storage, Container Registry, Crusoe, Alluxio, Data Virtualization, GB300, H100, Spot Instance, AWS Trainium, Google TPU (Tensor Processing Unit), AMD, ROCm, PyTorch, Google Vertex AI, AWS Bedrock, CUDA Python, Mojo, XGBoost, Random Forest, Ludwig (Uber Deep Learning AutoML), Paperspace, Voltage Park, Weights & Biases

The intro and outro music is from "Hitman's Lovesong" feat. Paola Graziano by The Freak Fandango Orchestra (CC BY-SA 3.0).
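The episode's framing of data gravity and of splitting training and inference across providers can be made concrete with a little arithmetic. The Python sketch below is not from the episode or from Saturn Cloud's tooling; it is a hypothetical illustration in which every provider name, GPU price, and egress rate is a made-up placeholder. It shows why a cheaper GPU neocloud can still win a training-cost comparison even after paying the one-time egress charge to move the dataset out of its home cloud.

from dataclasses import dataclass

# Illustrative placeholder: what the dataset's current home cloud charges
# per GB to move data out. This is the "data gravity tax" made concrete.
EGRESS_USD_PER_GB = 0.09

@dataclass
class Provider:
    name: str                  # hypothetical provider label
    gpu_hourly_usd: float      # illustrative on-demand price per GPU-hour
    colocated_with_data: bool  # True if the training data already lives here

def training_cost(p: Provider, gpu_hours: float, dataset_gb: float) -> float:
    """Total cost of a training run on this provider: GPU time plus a
    one-time egress charge whenever the dataset has to leave its home
    cloud to reach the GPUs."""
    egress = 0.0 if p.colocated_with_data else dataset_gb * EGRESS_USD_PER_GB
    return p.gpu_hourly_usd * gpu_hours + egress

providers = [
    Provider("hyperscaler-home", gpu_hourly_usd=6.50, colocated_with_data=True),
    Provider("gpu-neocloud", gpu_hourly_usd=2.20, colocated_with_data=False),
]

# Compare a 5,000 GPU-hour run over a 40 TB dataset: the cheaper GPUs may
# or may not win once the egress charge is included.
for p in providers:
    total = training_cost(p, gpu_hours=5_000, dataset_gb=40_000)
    print(f"{p.name}: ${total:,.0f}")

With these placeholder numbers the neocloud run costs $11,000 in GPU time plus $3,600 in egress, still well under $32,500 at the data's home cloud. Inference is usually the opposite trade: request payloads are small, so egress matters less and latency and managed services dominate, which is one reason teams split the two workload types across providers.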



Key Metrics

Pitches sent
71
From PodPitch users
Rank
#2092
Top 4.2% by pitch volume (Rank #2092 of 50,000)
Average rating
4.4
Ratings count may be unavailable
Reviews
1
Written reviews (when available)
Publish cadence
Daily or near-daily
Active monthly
Episode count
76
Data updated
Feb 10, 2026
Social followers
1.9K

Public Snapshot

Country
United States
Language
English
Language (ISO)
en
Release cadence
Daily or near-daily
Latest episode date
Tue Jan 27 2026

Audience & Outreach (Public)

Audience range
8K–20K / month
Public band
Reply rate band
35%+
Public band
Response time band
30+ days
Public band
Replies received
6–20
Public band

Public ranges are rounded for privacy. Unlock the full report for exact values.

Presence & Signals

Social followers
1.9K
Contact available
Yes
Masked on public pages
Sponsors detected
Yes
Guest format
Yes

Social links

No public profiles listed.

Demo to Unlock Full Outreach Intelligence

We publicly share enough context for discovery. For actionable outreach data, unlock the private blocks below.

Audience & Growth
Demo to unlock
Monthly listeners: 49,360
Reply rate: 18.2%
Avg response: 4.1 days
See audience size and growth. Demo to unlock.
Contact preview
h***@hidden
Get verified host contact details. Demo to unlock.
Sponsor signals
Demo to unlock
Sponsor mentions: Likely
Ad-read history: Available
View sponsorship signals and ad read history. Demo to unlock.
Book a demo

How To Pitch The Machine Learning Podcast


Want to get booked on podcasts like this?

Become the guest your future customers already trust.

PodPitch helps you find shows, draft personalized pitches, and hit send faster. We share enough public context for discovery; for actionable outreach data, unlock the private blocks.

  • Identify shows that match your audience and offer.
  • Write pitches in your voice (nothing sends without you).
  • Move from “maybe later” to booked interviews faster.
  • Unlock deeper outreach intelligence with a quick demo.

This show is Rank #2092 by pitch volume, with 71 pitches sent by PodPitch users.

Book a demo · Browse more shows
10 minutes. Friendly walkthrough. No pressure.
4.4 / 5
Ratings: N/A
Written reviews: 1

We summarize public review counts here; full review text aggregation is not shown on PodPitch yet.

Frequently Asked Questions About The Machine Learning Podcast


What is The Machine Learning Podcast about?

This show is your guidebook to building scalable and maintainable AI systems. You will learn how to architect AI applications, apply AI to your work, and the considerations involved in building or customizing new models. Everything that you need to know to deliver real impact and value with machine learning and artificial intelligence.

How often does The Machine Learning Podcast publish new episodes?

The Machine Learning Podcast publishes new episodes daily or near-daily.

How many listeners does The Machine Learning Podcast get?

PodPitch shows a public audience band (like "8K–20K / month"). Book a demo to unlock exact audience estimates and how we calculate them.

How can I pitch The Machine Learning Podcast?

Use PodPitch to access verified outreach details and pitch recommendations for The Machine Learning Podcast. Start at https://podpitch.com/try/1.

Which podcasts are similar to The Machine Learning Podcast?

This page includes internal links to similar podcasts. You can also browse the full directory at https://podpitch.com/podcasts.

How do I contact The Machine Learning Podcast?

Public pages only show a masked contact preview. Book a demo to unlock verified email and outreach fields.

Quick favor for your future self: want podcast bookings without the extra mental load? PodPitch helps you find shows, draft personalized pitches, and hit send faster.