
Through Another Lens Podcast

Technology · Podcasts · Business · EN · United States · Daily or near-daily
Rating unavailable
The podcast that flips conventional wisdom upside down. Where hidden truths become competitive advantages. marksylvester.substack.com
Top 94.1% by pitch volume (Rank #47040 of 50,000). Data updated Feb 10, 2026.

Key Facts

Publishes
Daily or near-daily
Episodes
39
Founded
N/A
Category
Technology
Number of listeners
Private
Hidden on public pages

Listen to this Podcast

Pitch this podcast
Get the guest pitch kit.
Book a quick demo to unlock the outreach details you actually need before you hit send.
  • Verified contact + outreach fields
  • Exact listener estimates (not just bands)
  • Reply rate + response timing signals
10 minutes. Friendly walkthrough. No pressure.
Book a demo
Public snapshot
Audience: N/A
Canonical: https://podpitch.com/podcasts/through-another-lens-podcast
Cadence: Active weekly
Reply rate: Under 2%

Latest Episodes


Before It Had a Name

Sun Feb 01 2026


This is the first of three stories about a system I didn’t know I was building. This one is about the problem. The next is about the discipline it taught me. The third is about what happens when the system meets real people with real ideas.

Nine months ago, I was staring at broken content. Not obviously broken—it read like something a professional might write. But something was wrong, and I could feel it before I could name it. The article had facts that weren’t facts. Confident claims with no foundation. The AI had invented a statistic and served it like truth, and if I hadn’t known the subject myself, I would have believed it. That was the first crack.

Then came the voice problem. I’d fed the system everything I’d written for years—transcripts of talks, blog posts, emails. The output sounded like a competent writer, but it didn’t sound like me. Close enough to fool strangers, not close enough to fool anyone who knew my work.

Then came the slop. I didn’t have that word yet. I just knew the writing was doing something annoying—saying the same thing twice in different words, padding paragraphs with throat-clearing, using five sentences where two would land harder. I started calling it “AI tells.” Little patterns that revealed the machine behind the curtain: em-dashes where I’d use periods, parallel constructions I’d never write, a kind of false confidence that performed authority instead of earning it.

Three problems. No framework. No vocabulary. Just deadlines and standards I wasn’t willing to lower. So I built gates.

The Immune System

I didn’t call them gates at first. I called them checks, then checkpoints. Then I realized they needed to be more than suggestions—they needed teeth.

The research gate came first. Every claim verified, every statistic sourced, every quote confirmed. If it couldn’t be proven, it couldn’t ship. The voice gate came next. Someone—something—had to read every piece and strip out the patterns that weren’t mine. Not just wrong words, but wrong rhythms, wrong energy, wrong assumptions about what makes writing good. Then the slop gate. I learned that word somewhere along the way. SLOP: Superfluity, Loops, Overwrought prose, Pretension. A checklist for everything AI does when it’s trying too hard. Then more gates—engagement, editorial standards, perspective and risk. Six gates total, each with the power to stop content from shipping.

I built a 38-agent system around these gates. Not because I planned to, but because each problem demanded a specialist. Research needed a researcher. Voice needed a guardian. Slop needed a detector. The team grew because the problems kept revealing themselves. By the time I was done, I had something I didn’t have a name for either. I was calling it Orchestrated Intelligence. It became the foundation of what we now build at Coastal Intelligence.

The Disease Gets a Name

Recently, I sat in a Section.ai workshop with thousands of other people. Section is where business leaders go to learn what’s actually happening in AI—not the hype, the practice. The topic was AI marketing, and the presenter put a term on the screen I’d never seen before: Jagged AI. The jagged frontier. The idea that AI capability doesn’t improve smoothly—it has towers and recesses. Brilliant at some things, fails in ways that don’t make sense at others, and the boundaries are unpredictable. It can pass the bar exam and fail to count the letters in a word. It can write code that works and invent citations that don’t exist. It can sound like an expert and miss what any beginner would catch. The term comes from researchers like Ethan Mollick at Wharton and Andrej Karpathy, former AI lead at Tesla—people who study how these systems actually behave in the wild, not just in demos. The presenter explained that this is why AI can’t be trusted to work alone. The jagged edge means you never know when it will fail. Human oversight isn’t optional; it’s structural.
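The gated pipeline described above can be sketched in a few lines. This is a minimal illustration, not the actual EVERYWHERE implementation: the gate names come from the essay, but every check below is a hypothetical stand-in stub.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GateResult:
    passed: bool
    reason: str = ""

# Each gate inspects a draft and can block it from shipping.
# These checks are illustrative stubs, not the real criteria.
def research_gate(draft: str) -> GateResult:
    # Real version: verify every claim, statistic, and quote.
    return GateResult("[unverified]" not in draft, "unverified claim found")

def voice_gate(draft: str) -> GateResult:
    # Real version: strip patterns that aren't the author's voice.
    return GateResult("—" not in draft, "em-dash where the author would use a period")

def slop_gate(draft: str) -> GateResult:
    # SLOP: Superfluity, Loops, Overwrought prose, Pretension.
    return GateResult(len(draft.split()) < 5000, "overwrought length")

GATES: list[Callable[[str], GateResult]] = [research_gate, voice_gate, slop_gate]

def ship(draft: str) -> bool:
    """A draft ships only if every gate passes; any gate can veto."""
    for gate in GATES:
        result = gate(draft)
        if not result.passed:
            print(f"Blocked by {gate.__name__}: {result.reason}")
            return False
    return True
```

The essential design choice is the veto: gates are not suggestions with scores to average, but hard stops, so one failed check keeps content from shipping regardless of how the others did.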
Then came the part that made me sit up. Large organizations are now hiring for this. There are people whose job is to monitor AI output for jagged failures—to catch the hallucinations, the voice drift, the slop, to stand between the machine and the audience. They’re building teams to do what I built nine months ago.

The Accidental Advantage

I’m not smarter than the people in that workshop. I’m not more informed. I didn’t have access to research they lacked. I had a different constraint: I was shipping.

When you’re publishing every week, you can’t wait for the industry to figure out best practices. You can’t pause until someone names the problem. You encounter the failures in real time, and you either solve them or lower your standards. I wasn’t willing to lower my standards. So I built. Gate by gate, agent by agent, fix by fix. Not from theory but from necessity.

That’s the accidental advantage of being a practitioner—the problems find you before the frameworks do. You’re forced to solve things that haven’t been named yet. Nine months later, the frameworks exist. The names exist. The job titles exist. And I’m sitting in a workshop realizing I’ve already built what they’re describing. The immune system came before the diagnosis.

The Point

If you’re building with AI and something feels wrong, trust that instinct. The terminology will catch up, the frameworks will follow. But the problems are real right now, and your solutions don’t need permission from academia to be valid. If you’re waiting for best practices before you start, you’ll always be behind. The practitioners are solving problems that won’t be named for months. By the time the workshops happen, the builders have moved on.

And if you’re wondering whether it’s too late to catch up: it isn’t. The jagged frontier is still jagged. The problems are still hard. The solutions are still being invented. Some of us just started inventing a little earlier.
Next week: what building that system taught me about leading humans.

Mark Sylvester is a founder of Coastal Intelligence, Santa Barbara’s AI think tank. He built EVERYWHERE, a 38-agent orchestrated intelligence platform, because he got tired of staring at broken content.

Want to see where orchestrated intelligence starts? Voice DNA captures how you actually communicate—so AI can finally sound like you: https://everywhere-voicedna.lovable.app/

Get full access to Through Another Lens at marksylvester.substack.com/subscribe


Key Metrics

Pitches sent
6
From PodPitch users
Rank
#47040
Top 94.1% by pitch volume (Rank #47040 of 50,000)
Average rating
N/A
Ratings count may be unavailable
Reviews
N/A
Written reviews (when available)
Publish cadence
Daily or near-daily
Active weekly
Episode count
39
Data updated
Feb 10, 2026
Social followers
871

Public Snapshot

Country
United States
Language
English
Language (ISO)
Release cadence
Daily or near-daily
Latest episode date
Sun Feb 01 2026

Audience & Outreach (Public)

Audience range
Private
Hidden on public pages
Reply rate band
Under 2%
Public band
Response time band
Private
Hidden on public pages
Replies received
Private
Hidden on public pages

Public ranges are rounded for privacy. Unlock the full report for exact values.

Presence & Signals

Social followers
871
Contact available
Yes
Masked on public pages
Sponsors detected
Private
Hidden on public pages
Guest format
Private
Hidden on public pages

Social links

No public profiles listed.

Demo to Unlock Full Outreach Intelligence

We publicly share enough context for discovery. For actionable outreach data, unlock the private blocks below.

Audience & Growth
Demo to unlock
Monthly listeners: 49,360
Reply rate: 18.2%
Avg response: 4.1 days
See audience size and growth. Demo to unlock.
Contact preview
m***@hidden
Get verified host contact details. Demo to unlock.
Sponsor signals
Demo to unlock
Sponsor mentions: Likely
Ad-read history: Available
View sponsorship signals and ad read history. Demo to unlock.
Book a demo

How To Pitch Through Another Lens Podcast


Want to get booked on podcasts like this?

Become the guest your future customers already trust.

PodPitch helps you find shows, draft personalized pitches, and hit send faster. We share enough public context for discovery; for actionable outreach data, unlock the private blocks.

  • Identify shows that match your audience and offer.
  • Write pitches in your voice (nothing sends without you).
  • Move from “maybe later” to booked interviews faster.
  • Unlock deeper outreach intelligence with a quick demo.

This show is Rank #47040 by pitch volume, with 6 pitches sent by PodPitch users.

Book a demo · Browse more shows
10 minutes. Friendly walkthrough. No pressure.
Rating unavailable
Ratings: N/A
Written reviews: N/A

We summarize public review counts here; full review text aggregation is not shown on PodPitch yet.

Frequently Asked Questions About Through Another Lens Podcast


What is Through Another Lens Podcast about?

The podcast that flips conventional wisdom upside down. Where hidden truths become competitive advantages. marksylvester.substack.com

How often does Through Another Lens Podcast publish new episodes?

Daily or near-daily

How many listeners does Through Another Lens Podcast get?

PodPitch shows a public audience band; for this show it is currently listed as "N/A". Book a demo to unlock exact audience estimates and how we calculate them.

How can I pitch Through Another Lens Podcast?

Use PodPitch to access verified outreach details and pitch recommendations for Through Another Lens Podcast. Start at https://podpitch.com/try/1.

Which podcasts are similar to Through Another Lens Podcast?

This page includes internal links to similar podcasts. You can also browse the full directory at https://podpitch.com/podcasts.

How do I contact Through Another Lens Podcast?

Public pages only show a masked contact preview. Book a demo to unlock verified email and outreach fields.

Quick favor for your future self: want podcast bookings without the extra mental load? PodPitch helps you find shows, draft personalized pitches, and hit send faster.