Before It Had a Name
Sun Feb 01 2026
This is the first of three stories about a system I didn’t know I was building. This one is about the problem. The next is about the discipline it taught me. The third is about what happens when the system meets real people with real ideas.
Nine months ago, I was staring at broken content. Not obviously broken—it read like something a professional might write. But something was wrong, and I could feel it before I could name it.
The article had facts that weren’t facts. Confident claims with no foundation. The AI had invented a statistic and served it like truth, and if I hadn’t known the subject myself, I would have believed it. That was the first crack.
Then came the voice problem. I’d fed the system everything I’d written for years—transcripts of talks, blog posts, emails. The output sounded like a competent writer, but it didn’t sound like me. Close enough to fool strangers, not close enough to fool anyone who knew my work.
Then came the slop. I didn’t have that word yet. I just knew the writing was doing something annoying—saying the same thing twice in different words, padding paragraphs with throat-clearing, using five sentences where two would land harder. I started calling it “AI tells.” Little patterns that revealed the machine behind the curtain: em-dashes where I’d use periods, parallel constructions I’d never write, a kind of false confidence that performed authority instead of earning it.
Three problems. No framework. No vocabulary. Just deadlines and standards I wasn’t willing to lower. So I built gates.
The Immune System
I didn’t call them gates at first. I called them checks, then checkpoints. Then I realized they needed to be more than suggestions—they needed teeth.
The research gate came first. Every claim verified, every statistic sourced, every quote confirmed. If it couldn’t be proven, it couldn’t ship. The voice gate came next. Someone—something—had to read every piece and strip out the patterns that weren’t mine. Not just wrong words, but wrong rhythms, wrong energy, wrong assumptions about what makes writing good.
Then the slop gate. I learned that word somewhere along the way. SLOP: Superfluity, Loops, Overwrought prose, Pretension. A checklist for everything AI does when it’s trying too hard. Then more gates—engagement, editorial standards, perspective and risk. Six gates total, each with the power to stop content from shipping.
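The pattern itself is simpler than the system that grew around it. Here's a minimal sketch of the idea in Python; the gate names and the check interface are mine for illustration, not the actual EVERYWHERE code. The only rule that matters is the last one: every gate has veto power.

```python
# Minimal sketch of the gate pattern, not the production EVERYWHERE code.
# Gate names mirror the essay; the check interface is illustrative.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GateResult:
    gate: str
    passed: bool
    issues: list[str] = field(default_factory=list)

# A gate is any function that inspects a draft and returns a GateResult.
Gate = Callable[[str], GateResult]

def slop_gate(draft: str) -> GateResult:
    """Toy check for one 'AI tell': repeated sentences (the Loops in SLOP)."""
    sentences = [s.strip().lower() for s in draft.split(".") if s.strip()]
    dupes = {s for s in sentences if sentences.count(s) > 1}
    return GateResult("slop", passed=not dupes,
                      issues=[f"repeated sentence: {d!r}" for d in dupes])

def run_gates(draft: str, gates: list[Gate]) -> bool:
    """Every gate has veto power: one failure and the draft doesn't ship."""
    results = [gate(draft) for gate in gates]
    for r in results:
        for issue in r.issues:
            print(f"[{r.gate}] {issue}")
    return all(r.passed for r in results)

if __name__ == "__main__":
    draft = "This matters. This matters. Here is why."
    print("ship" if run_gates(draft, [slop_gate]) else "blocked")
```

The real checks are judgment calls made by specialist agents, not string comparisons. But the structure is the same: a draft passes through every gate, and any one of them can stop it from shipping.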
I built a 38-agent system around these gates. Not because I planned to, but because each problem demanded a specialist. Research needed a researcher. Voice needed a guardian. Slop needed a detector. The team grew because the problems kept revealing themselves. By the time I was done, I had something I didn’t have a name for either. I was calling it Orchestrated Intelligence. It became the foundation of what we now build at Coastal Intelligence.
The Disease Gets a Name
Recently, I sat in a Section.ai workshop with thousands of other people. Section is where business leaders go to learn what’s actually happening in AI—not the hype, the practice. The topic was AI marketing, and the presenter put a term on the screen I’d never seen before: Jagged AI.
The jagged frontier. The idea that AI capability doesn't improve smoothly: it has towers and recesses. Brilliant at some tasks, it fails inexplicably at others, and the boundary between the two is unpredictable. It can pass the bar exam and fail to count the letters in a word. It can write code that works and invent citations that don't exist. It can sound like an expert and miss what any beginner would catch.
The term comes from researchers like Ethan Mollick at Wharton and Andrej Karpathy, former AI lead at Tesla—people who study how these systems actually behave in the wild, not just in demos. The presenter explained that this is why AI can’t be trusted to work alone. The jagged edge means you never know when it will fail. Human oversight isn’t optional; it’s structural.
Then came the part that made me sit up. Large organizations are now hiring for this. There are people whose job is to monitor AI output for jagged failures—to catch the hallucinations, the voice drift, the slop, to stand between the machine and the audience. They’re building teams to do what I built nine months ago.
The Accidental Advantage
I’m not smarter than the people in that workshop. I’m not more informed. I didn’t have access to research they lacked. I had a different constraint: I was shipping.
When you’re publishing every week, you can’t wait for the industry to figure out best practices. You can’t pause until someone names the problem. You encounter the failures in real time, and you either solve them or lower your standards. I wasn’t willing to lower my standards.
So I built. Gate by gate, agent by agent, fix by fix. Not from theory but from necessity. That’s the accidental advantage of being a practitioner—the problems find you before the frameworks do. You’re forced to solve things that haven’t been named yet.
Nine months later, the frameworks exist. The names exist. The job titles exist. And I’m sitting in a workshop realizing I’ve already built what they’re describing. The immune system came before the diagnosis.
The Point
If you’re building with AI and something feels wrong, trust that instinct. The terminology will catch up, the frameworks will follow. But the problems are real right now, and your solutions don’t need permission from academia to be valid.
If you’re waiting for best practices before you start, you’ll always be behind. The practitioners are solving problems that won’t be named for months. By the time the workshops happen, the builders have moved on. And if you’re wondering whether it’s too late to catch up: it isn’t. The jagged frontier is still jagged. The problems are still hard. The solutions are still being invented.
Some of us just started inventing a little earlier.
Next week: what building that system taught me about leading humans.
Mark Sylvester is a founder of Coastal Intelligence, Santa Barbara's AI think tank. He built EVERYWHERE, a 38-agent orchestrated intelligence platform, because he got tired of staring at broken content.
Want to see where orchestrated intelligence starts? Voice DNA captures how you actually communicate—so AI can finally sound like you:
https://everywhere-voicedna.lovable.app/
Get full access to Through Another Lens at marksylvester.substack.com/subscribe