
The Content Strategy Experts - Scriptorium

Business Podcasts · EN-US · United States
4.3 / 5
The content strategy experts at Scriptorium discuss how to manage, structure, organize, and distribute content.
Top 61% by pitch volume (Rank #30478 of 50,000)
Data updated Feb 10, 2026

Key Facts

Publishes
N/A
Episodes
191
Founded
N/A
Category
Business
Number of listeners
Private
Hidden on public pages

Listen to this Podcast

Pitch this podcast
Get the guest pitch kit.
Book a quick demo to unlock the outreach details you actually need before you hit send.
  • Verified contact + outreach fields
  • Exact listener estimates (not just bands)
  • Reply rate + response timing signals
10 minutes. Friendly walkthrough. No pressure.
Book a demo
Public snapshot
Audience: Under 4K / month
Canonical: https://podpitch.com/podcasts/the-content-strategy-experts-scriptorium
Reply rate: Under 2%

Latest Episodes

Back to top

From black box to business tool: Making AI transparent and accountable

Mon Jan 26 2026

Listen

As AI adoption accelerates, accountability and transparency issues are accumulating quickly. What should organizations be looking for, and what tools keep AI transparent? In this episode, Sarah O’Keefe sits down with Nathan Gilmour, the Chief Technical Officer of Writemore AI, to discuss a new approach to AI and accountability.

Sarah O’Keefe: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do.

Nathan Gilmour: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool. But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, they’re largely black boxes. We want to bring clarity to these black boxes and make them transparent, because organizations do want to implement AI tools to offer efficiencies or optimizations within their organizations. However, information security policies may not allow it.

Related links:
  • Sarah O’Keefe: AI and content: Avoiding disaster
  • Sarah O’Keefe: AI and accountability
  • Writemore AI
  • LinkedIn: Nathan Gilmour
  • Sarah O’Keefe

Transcript:

(Introduction with ambient background music)

Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.

Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.

Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.

Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.

(End of introduction)

Sarah O’Keefe: Hey everyone. I’m Sarah O’Keefe. Welcome to another episode.
I am here today with Nathan Gilmour, who’s the Chief Technical Officer of Writemore AI. Nathan, welcome.

Nathan Gilmour: Thanks, Sarah. Happy to be here.

SO: Welcome aboard. So tell us a little bit about what you’re doing over there. You’ve got a new company and a new product that’s, what, a year old?

NG: Give or take, yep.

SO: Yep. So what are you up to over there? Is it AI-related?

NG: It is actually AI-related, but not AI-related in the traditional sense. Right now, we’ve built a product, or tool, that helps technical authoring teams convert from traditional Word or PDF formats, which make up the bulk of the technical documentation ecosystem, to structured authoring. That means they can get all of the benefits of reuse, easier publishing, and high compatibility with various content management systems, and can do it in minutes where traditional conversions could take hours. So it really helps authoring teams get their content out to the world at large in a much more efficient and regulated fashion.

SO: So I pick up a corpus of 10 or 20 or 50,000 pages of stuff, and you’re going to take that, and you’re going to shove it into a magic black box, and out comes, you said, structured content, DITA?

NG: Correct.

SO: Out comes DITA. Okay. What does this actually … Give us the … That’s the 30,000-foot view. So what’s the parachute-level view?

NG: Perfect. Underneath the hood, it’s actually a very deterministic pipeline. A deterministic pipeline means that there is a lot more code supporting it. It’s not an AI inferring what it should do; there’s actual code that guides the conversion process first. So going from, let’s say, Word to DITA, there are tools within the DITA Open Toolkit that facilitate that much more mechanically, rather than trusting an AI to do it. We know that AI does struggle with structure, especially as context windows expand: it becomes more and more inaccurate.
So if we feed these models far more mechanically created content, they become much more accurate. You’re only trusting them with the more nuanced parts of the process. So there’s a big difference between determinism and probabilism: determinism is the mechanical conversion of something, while probabilism is allowing the AI to infer a process. That’s where we differ: our process is much more deterministic, versus allowing the AI to do everything on its own.

SO: So is it fair to say that you combined the … And for deterministic, I’m going to say scripting. But is it fair to say that you combined the DITA OT scripting processing with additional AI around that to improve the results?

NG: Correct. It also expedites the results, so that instead of having a human do much of the semantic understanding of the document, we allow the AI to do it as a far more focused task. Machines can read faster.

SO: Okay. And so for most of us, when we start talking about AI, most people think large language model, and specifically ChatGPT, but that’s not what this is. This is not a consumer front end you go play with. This is a tool for authors.

NG: Correct. And even further than that, it’s a partner tool for authors. It allows them to continue authoring in a format that they’re familiar with. Let’s take Microsoft Word, for example. Sometimes the shift from Word to structured authoring can be considered an enormous upheaval. Allowing authors to continue authoring in a format that they’re good at and familiar with, and then have a partner tool that expedites the conversion to structured authoring so that they can maintain a single source of truth, makes things more manageable and more reliable in the long run. So instead of effectively causing a riot with the authoring teams, we can empower them to continue doing what they’re good at.

SO: Okay.
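The determinism-versus-probabilism split Nathan describes can be illustrated with a small sketch. This is hypothetical code, not Writemore’s actual pipeline: document structure is derived mechanically from Word paragraph styles, and a model (stubbed here) is consulted only for one narrow, bounded decision whose answer is constrained to a fixed vocabulary.

```python
# Hypothetical illustration of a determinism-first conversion pipeline.
# Structure comes from mechanical rules over paragraph styles; the AI
# (stubbed below) only answers one constrained question per topic.

def split_into_topics(paragraphs):
    """Deterministic step: group styled paragraphs into topics.

    Each paragraph is a (style, text) pair; a 'Heading 1' style
    mechanically starts a new topic -- no inference involved.
    """
    topics = []
    for style, text in paragraphs:
        if style == "Heading 1":
            topics.append({"title": text, "body": []})
        elif topics:
            topics[-1]["body"].append(text)
    return topics

def classify_topic(topic, classifier):
    """Narrow probabilistic step: ask a model only for the topic type.

    `classifier` stands in for an LLM call; its answer is constrained
    to a fixed vocabulary, so a bad answer cannot corrupt structure.
    """
    allowed = {"concept", "task", "reference"}
    answer = classifier(topic["title"], topic["body"])
    return answer if answer in allowed else "concept"  # safe default

doc = [
    ("Heading 1", "Replacing the filter"),
    ("Body", "Remove the cover."),
    ("Body", "Lift out the old filter."),
    ("Heading 1", "Filter specifications"),
    ("Body", "Use model FX-200 filters only."),
]

topics = split_into_topics(doc)
# Stub classifier standing in for a model call.
stub = lambda title, body: "task" if "Remove the cover." in body else "reference"
```

The point of the sketch is that the model’s output can only select among predefined labels; it never gets to decide where topics begin or end.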
So we drop the Word file in and magically DITA comes out. What if it’s not quite right? What if our AI doesn’t get it exactly right? I mean, how do I know that it’s not producing something that looks good, but is actually wrong?

NG: Great question. And that’s where, prior to doing anything further, there is a review period for the human authors. In the event that the AI does make a mistake, the process is completely transparent: the output, the payload, as we describe it, comes with a full audit report. Every determination that the AI makes is traced, tracked, and explained. And further to that, the humans are able to take that payload out and open it up in an XML editor. At this point in time, the content is converted and ready to go into the CCMS. Prior to doing that, it can go to a subject matter expert who is familiar with structured authoring to do a final validation of the content to make sure that it is accurate. The biggest differentiator, though, is that the tool never creates content. The humans need to create content, because they are the subject matter experts within their field. They create the first draft. The tool takes it and converts it, but doesn’t change anything; it only works with the material as it stands. And once that is complete, it goes back into another human-centered review, so there are audit trails and it is traceable. And there is a final touchpoint by a human prior to the final migration into their content management system.

SO: So you’re saying that basically you can diff this. I mean, you can look at the before and the after and see where all the changes are coming in.

NG: Correct.

SO: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do.

NG: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool.
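An audit trail of the kind Nathan describes, where every determination is traced, tracked, and explained, could be modeled along these lines. This is a minimal sketch; the record fields are assumptions for illustration, not Writemore’s actual schema.

```python
# Minimal sketch of a per-determination audit report: every decision
# the conversion makes is logged with its source location, the action
# taken, and a human-readable rationale, so a reviewer can trace and
# diff the output against the source. Field names are illustrative.

from dataclasses import dataclass, field, asdict

@dataclass
class AuditEntry:
    source_ref: str   # where in the input the decision applies
    action: str       # what the converter did
    rationale: str    # why, in plain language, for the reviewer

@dataclass
class AuditReport:
    entries: list = field(default_factory=list)

    def record(self, source_ref, action, rationale):
        self.entries.append(AuditEntry(source_ref, action, rationale))

    def to_rows(self):
        """Flatten the entries for export alongside the converted payload."""
        return [asdict(e) for e in self.entries]

report = AuditReport()
report.record("p.12, Heading 1", "started new DITA topic",
              "'Heading 1' style mechanically starts a topic")
report.record("p.12-13, numbered list", "mapped to <steps>",
              "ordered list inside a task topic")
```

Because each entry carries its source reference, a subject matter expert reviewing the payload in an XML editor can check every decision against the original Word file, which is the “diff the before and after” property Sarah asks about.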
But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, they’re largely black boxes. Where we want to come in is to bring clarity to these black boxes, to make them transparent, I guess you can say. Because organizations do want to implement AI tools to offer efficiencies or optimizations within their organizations; however, information security policies may not allow it. One of the added benefits that we have baked into the tool from a backend perspective is its ability to be completely internet-unaware. Meaning, if an organization has the capital and the infrastructure to host a model, this can be plugged directly into their existing AI infrastructure and use its brain, which, realistically, is what the language model is. It’s just a brain. So if companies have invested the time and the capital to build out this infrastructure, the Writemore tool can plug right into it and follow those preexisting information security policies, without having to worry about something going out to the World Wide Web.

SO: So the implication is that I can put this inside my very large organization with very strict information security policies and not be suddenly feeding my entire intellectual property corpus to a public-facing AI.

NG: That is entirely correct.

SO: We are not doing that. Okay. So I want to step back a tiny bit and think about what it means, because it seems like the thing that we’re circling around is accountability, right? What does it mean to use AI and still have accountability? And so, based on your experience of what you’ve been working on and building, what are some of the things that you’ve uncovered in terms of what we should be looking for generally as we’re building out AI-based things? What should we be looking for in terms of accountability of AI?

NG: The major accountability question for AI is: what could it look like if a business model changes?
Let’s kind of focus on the large players in the market right now. There
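The “internet-unaware” deployment Nathan mentions, where the tool talks only to a model hosted inside the organization, can be approximated with an allowlist check enforced before any model endpoint is used. This is a hedged sketch; the host names are placeholders, not real infrastructure.

```python
# Sketch of an "internet-unaware" guard: before the tool contacts any
# model endpoint, the endpoint's host must appear on an explicit
# internal allowlist, so a request can never reach the public internet.
# The host names below are placeholders for illustration only.

from urllib.parse import urlparse

INTERNAL_HOSTS = {"llm.internal.example", "gpu-cluster.corp.example"}

def require_internal(endpoint_url):
    """Return the URL unchanged, or raise if its host is not internal."""
    host = urlparse(endpoint_url).hostname
    if host not in INTERNAL_HOSTS:
        raise ValueError(f"refusing non-internal model endpoint: {host}")
    return endpoint_url

ok = require_internal("https://llm.internal.example/v1/chat")
```

In a real deployment this check would sit alongside network-level controls (firewalls, an air-gapped segment); the code only illustrates the policy that the self-hosted model’s “brain” is the sole endpoint the tool will consult.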


Key Metrics

Back to top
Pitches sent
10
From PodPitch users
Rank
#30478
Top 61% by pitch volume (Rank #30478 of 50,000)
Average rating
4.3
Ratings count may be unavailable
Reviews
1
Written reviews (when available)
Publish cadence
N/A
Episode count
191
Data updated
Feb 10, 2026
Social followers
2.7K

Public Snapshot

Back to top
Country
United States
Language
EN-US
Release cadence
N/A
Latest episode date
Mon Jan 26 2026

Audience & Outreach (Public)

Back to top
Audience range
Under 4K / month
Public band
Reply rate band
Under 2%
Public band
Response time band
Private
Hidden on public pages
Replies received
Private
Hidden on public pages

Public ranges are rounded for privacy. Unlock the full report for exact values.

Presence & Signals

Back to top
Social followers
2.7K
Contact available
Yes
Masked on public pages
Sponsors detected
No
Guest format
No

Social links

No public profiles listed.

Demo to Unlock Full Outreach Intelligence

We publicly share enough context for discovery. For actionable outreach data, unlock the private blocks below.

Audience & Growth
Demo to unlock
Monthly listeners
49,360
Reply rate
18.2%
Avg response
4.1 days
See audience size and growth. Demo to unlock.
Contact preview
i***@hidden
Get verified host contact details. Demo to unlock.
Sponsor signals
Demo to unlock
Sponsor mentions
Likely
Ad-read history
Available
View sponsorship signals and ad read history. Demo to unlock.
Book a demo

How To Pitch The Content Strategy Experts - Scriptorium

Back to top

Want to get booked on podcasts like this?

Become the guest your future customers already trust.

PodPitch helps you find shows, draft personalized pitches, and hit send faster. We share enough public context for discovery; for actionable outreach data, unlock the private blocks.

  • Identify shows that match your audience and offer.
  • Write pitches in your voice (nothing sends without you).
  • Move from “maybe later” to booked interviews faster.
  • Unlock deeper outreach intelligence with a quick demo.

This show is Rank #30478 by pitch volume, with 10 pitches sent by PodPitch users.

Book a demo
Browse more shows
10 minutes. Friendly walkthrough. No pressure.
4.3 / 5
Ratings
N/A
Written reviews
1

We summarize public review counts here; full review text aggregation is not shown on PodPitch yet.

Frequently Asked Questions About The Content Strategy Experts - Scriptorium

Back to top

What is The Content Strategy Experts - Scriptorium about?

The content strategy experts at Scriptorium discuss how to manage, structure, organize, and distribute content.

How often does The Content Strategy Experts - Scriptorium publish new episodes?

The Content Strategy Experts - Scriptorium publishes on a variable schedule.

How many listeners does The Content Strategy Experts - Scriptorium get?

PodPitch shows a public audience band (like "Under 4K / month"). Book a demo to unlock exact audience estimates and how we calculate them.

How can I pitch The Content Strategy Experts - Scriptorium?

Use PodPitch to access verified outreach details and pitch recommendations for The Content Strategy Experts - Scriptorium. Start at https://podpitch.com/try/1.

Which podcasts are similar to The Content Strategy Experts - Scriptorium?

This page includes internal links to similar podcasts. You can also browse the full directory at https://podpitch.com/podcasts.

How do I contact The Content Strategy Experts - Scriptorium?

Public pages only show a masked contact preview. Book a demo to unlock verified email and outreach fields.

Quick favor for your future self: want podcast bookings without the extra mental load? PodPitch helps you find shows, draft personalized pitches, and hit send faster.