Stewart Squared
Rank #19113
Technology · Podcasts · Business · Entrepreneurship · EN · United States · Daily or near-daily
Rating unavailable
Stewart Alsop III reviews a broad range of topics with his father Stewart Alsop II, who started his career in the personal computer industry and is still actively involved in investing in startup technology companies. Stewart Alsop III is fascinated by what his father was doing as SAIII was growing up in the Golden Age of Silicon Valley. Topics include:
  • How the personal computing revolution led to the internet, which led to the mobile revolution
  • Now we are covering the future of the internet and computing
  • How AI ties the personal computer, the smartphone and the internet together
Top 38.2% by pitch volume (Rank #19113 of 50,000). Data updated Feb 10, 2026.

Key Facts

Publishes
Daily or near-daily
Episodes
75
Founded
N/A
Category
Technology
Number of listeners
Private
Hidden on public pages

Listen to this Podcast

Pitch this podcast
Get the guest pitch kit.
Book a quick demo to unlock the outreach details you actually need before you hit send.
  • Verified contact + outreach fields
  • Exact listener estimates (not just bands)
  • Reply rate + response timing signals
10 minutes. Friendly walkthrough. No pressure.
Book a demo
Public snapshot
Audience: Under 4K / month
Canonical: https://podpitch.com/podcasts/stewart-squared
Cadence: Active weekly
Reply rate: 35%+

Latest Episodes

Back to top

Episode #75: The Real-Time Problem: Why LLMs Hit a Wall and World Models Won't

Thu Feb 05 2026

Listen

In this episode of the Stewart Squared podcast, host Stewart Alsop III sits down with his father Stewart Alsop II to explore the emerging field of world models and their potential to eclipse large language models as the future of AI development. Stewart II shares insights from his newsletter "What Matters? (to me)" available at salsop.substack.com, where he argues that the industry has already maxed out the LLM approach and needs to shift focus toward world models—a position championed by Yann LeCun. The conversation covers everything from the strategic missteps of Meta and the dominance of Google's Gemini to the technical differences between simulation-based world models for movies, robotics applications requiring real-world interaction, and military or infrastructure use cases like air traffic control. They also discuss how world models use fundamentally different data types including pixels, Gaussian splats, and time-based movement data, and question whether the GPU-centric infrastructure that powered the LLM boom will even be necessary for this next phase of AI development. Listeners can find the full article mentioned in this episode, "Dear Hollywood: Resistance is Futile", at https://salsop.substack.com/p/dear-hollywood-resistance-is-futile.

Timestamps
  • 00:00 Introduction to World Models
  • 01:17 The Limitations of LLMs
  • 07:41 The Future of AI: World Models
  • 19:04 Real-Time Data and World Models
  • 25:12 The Competitive Landscape of AI
  • 26:58 Understanding Processing Units: GPUs, TPUs, and ASICs
  • 29:17 The Philosophical Implications of Rapid Tech Change
  • 33:24 Intellectual Property and Patent Strategies in Tech
  • 44:12 China's Impact on Global Intellectual Property

Key Insights
1. The Era of Large Language Models Has Peaked: The fundamental architecture of LLMs—predicting the next token from massive text datasets—has reached its optimization limit. Google's Gemini has essentially won the LLM race by integrating images, text, and coding capabilities, while Anthropic has captured the coding niche with Claude. The industry's continued investment in larger LLMs represents backward-looking strategy rather than innovation. Meta's decision to pursue another text-based LLM despite having early access to world model research exemplifies poor strategic thinking—solving yesterday's problem instead of anticipating tomorrow's challenges.
2. World Models Represent the Next Paradigm Shift: World models fundamentally differ from LLMs by incorporating multiple data types beyond text, including pixels, Gaussian splats, time, and movement (see the sketch after this list). Rather than reverting to the mean like LLMs trained on historical data, world models attempt to understand and simulate how the real world actually works. This represents Yann LeCun's vision for moving from generative AI toward artificial general intelligence, requiring an entirely different technological approach than simply building bigger language models.
3. Three Distinct Categories of World Models Are Emerging: World models are being developed for fundamentally different purposes: creating realistic video content (like OpenAI's Sora), enabling robotics and autonomous vehicles to navigate the physical world, and simulating complex real-world systems like air traffic control or military operations. Each category has unique requirements and challenges. Companies like Niantic Spatial are building geolocation-based world models from massive crowdsourced data, while Maxar is creating visual models of the entire planet for both commercial and military applications.
4. The Hardware Infrastructure May Completely Change: The GPU-centric data center architecture optimized for LLM training may not be ideal for world models. Unlike LLMs, which require brute-force processing of massive text datasets through tightly coupled GPU clusters, world models might benefit from distributed computing architectures using alternative processors like TPUs (Tensor Processing Units) or even FPGAs. This could represent another paradigm shift similar to when Nvidia pivoted from gaming graphics to AI processing, potentially creating opportunities for new hardware winners.
5. Intellectual Property Strategy Faces Fundamental Disruption: The traditional patent portfolio approach that has governed technology competition may not apply to AI systems. The rapid development cycle enabled by AI coding tools, combined with the conceptual difficulty of patenting software versus hardware, raises questions about whether patents remain effective protective mechanisms. China's disregard for intellectual property combined with its manufacturing superiority further complicates this landscape, particularly as AI accelerates the speed at which novel applications can be developed and deployed.
6. Real-Time Performance Defines Competitive Advantage: Technologies like Twitch's live streaming demonstrate that execution excellence often matters more than patents. World models require constant real-time updates across multiple data types as everything in the physical world continuously changes. This emphasis on real-time performance and distributed systems represents a core technical challenge that differs fundamentally from the batch processing approach of LLM training. Companies that master real-time world modeling may gain advantages that patents alone cannot protect.
7. The Technology Is Moving Faster Than Individual Comprehension: Even veteran technology observers with 50 years of experience find the current pace of AI development challenging to track. The emergence of "vibe coding" enables non-programmers to build functional applications through natural language, while specialized knowledge about components like Gaussian splats, ASICs, and distributed architectures becomes increasingly esoteric. This knowledge fragmentation creates a divergence between technologists deeply engaged with these developments and the broader population, potentially representing an early phase of technological singularity.
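To make the contrast at the heart of this episode concrete, here is a minimal, purely illustrative Python sketch (not from the episode; all names and data shapes are hypothetical): a toy next-token predictor that only consumes text history, next to a toy world-model step that advances a multimodal state combining pixels, simplified Gaussian splats, and time.

```python
# Illustrative toy only: real LLMs and world models are vastly more complex.
from dataclasses import dataclass


def next_token(history: list[str], bigram_counts: dict[tuple[str, str], int]) -> str:
    """LLM-style: pick the most frequent follower of the last token,
    a crude stand-in for next-token prediction over a text corpus."""
    last = history[-1]
    candidates = {b: n for (a, b), n in bigram_counts.items() if a == last}
    return max(candidates, key=candidates.get) if candidates else "<unk>"


@dataclass
class WorldState:
    """Hypothetical state holding the data types mentioned in the episode:
    a pixel frame, simplified Gaussian splats, and a clock."""
    t: float
    pixels: list[list[int]]                    # tiny grayscale frame
    splats: list[tuple[float, float, float]]   # (x, y, x-velocity) per splat


def step(state: WorldState, dt: float) -> WorldState:
    """World-model-style: predict the next state of the world, not the next
    word, by moving each splat along its velocity and advancing the clock."""
    moved = [(x + vx * dt, y, vx) for (x, y, vx) in state.splats]
    return WorldState(t=state.t + dt, pixels=state.pixels, splats=moved)


if __name__ == "__main__":
    counts = {("world", "models"): 3, ("world", "war"): 1}
    print(next_token(["world"], counts))                   # -> models

    s = WorldState(t=0.0, pixels=[[0, 255]], splats=[(0.0, 1.0, 0.5)])
    print(step(s, dt=1.0).splats)                          # -> [(0.5, 1.0, 0.5)]
```

The point of the sketch is only the shape of the problem: the first function maps text to text, while the second maps a structured, time-indexed state to the next state, which is why the episode argues the data pipelines and hardware for world models may look quite different from those built for LLMs.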


Key Metrics

Back to top
Pitches sent
18
From PodPitch users
Rank
#19113
Top 38.2% by pitch volume (Rank #19113 of 50,000)
Average rating
N/A
Ratings count may be unavailable
Reviews
N/A
Written reviews (when available)
Publish cadence
Daily or near-daily
Active weekly
Episode count
75
Data updated
Feb 10, 2026
Social followers
13.1K

Public Snapshot

Back to top
Country
United States
Language
English
Language (ISO)
en
Release cadence
Daily or near-daily
Latest episode date
Thu Feb 05 2026

Audience & Outreach (Public)

Back to top
Audience range
Under 4K / month
Public band
Reply rate band
35%+
Public band
Response time band
1–2 days
Public band
Replies received
6–20
Public band

Public ranges are rounded for privacy. Unlock the full report for exact values.

Presence & Signals

Back to top
Social followers
13.1K
Contact available
Yes
Masked on public pages
Sponsors detected
Private
Hidden on public pages
Guest format
Private
Hidden on public pages

Social links

No public profiles listed.

Demo to Unlock Full Outreach Intelligence

We publicly share enough context for discovery. For actionable outreach data, unlock the private blocks below.

Audience & Growth
Demo to unlock
Monthly listeners: 49,360
Reply rate: 18.2%
Avg response: 4.1 days
See audience size and growth. Demo to unlock.
Contact preview
s***@hidden
Get verified host contact details. Demo to unlock.
Sponsor signals
Demo to unlock
Sponsor mentions: Likely
Ad-read history: Available
View sponsorship signals and ad read history. Demo to unlock.
Book a demo

How To Pitch Stewart Squared

Back to top

Want to get booked on podcasts like this?

Become the guest your future customers already trust.

PodPitch helps you find shows, draft personalized pitches, and hit send faster. We share enough public context for discovery; for actionable outreach data, unlock the private blocks.

  • Identify shows that match your audience and offer.
  • Write pitches in your voice (nothing sends without you).
  • Move from “maybe later” to booked interviews faster.
  • Unlock deeper outreach intelligence with a quick demo.

This show is Rank #19113 by pitch volume, with 18 pitches sent by PodPitch users.

Book a demo · Browse more shows
10 minutes. Friendly walkthrough. No pressure.
Rating unavailable
Ratings: N/A
Written reviews: N/A

We summarize public review counts here; full review text aggregation is not shown on PodPitch yet.

Frequently Asked Questions About Stewart Squared

Back to top

What is Stewart Squared about?

Stewart Alsop III reviews a broad range of topics with his father Stewart Alsop II, who started his career in the personal computer industry and is still actively involved in investing in startup technology companies. Stewart Alsop III is fascinated by what his father was doing as SAIII was growing up in the Golden Age of Silicon Valley. Topics include:
  • How the personal computing revolution led to the internet, which led to the mobile revolution
  • Now we are covering the future of the internet and computing
  • How AI ties the personal computer, the smartphone and the internet together

How often does Stewart Squared publish new episodes?

Daily or near-daily

How many listeners does Stewart Squared get?

PodPitch shows a public audience band (like "Under 4K / month"). Book a demo to unlock exact audience estimates and how we calculate them.

How can I pitch Stewart Squared?

Use PodPitch to access verified outreach details and pitch recommendations for Stewart Squared. Start at https://podpitch.com/try/1.

Which podcasts are similar to Stewart Squared?

This page includes internal links to similar podcasts. You can also browse the full directory at https://podpitch.com/podcasts.

How do I contact Stewart Squared?

Public pages only show a masked contact preview. Book a demo to unlock verified email and outreach fields.

Quick favor for your future self: want podcast bookings without the extra mental load? PodPitch helps you find shows, draft personalized pitches, and hit send faster.