From black box to business tool: Making AI transparent and accountable
Mon Jan 26 2026
As AI adoption accelerates, accountability and transparency issues are accumulating quickly. What should organizations be looking for, and what tools keep AI transparent? In this episode, Sarah O’Keefe sits down with Nathan Gilmour, the Chief Technical Officer of Writemore AI, to discuss a new approach to AI and accountability.
Sarah O’Keefe: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do.
Nathan Gilmour: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool. But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, they’re largely black boxes. We want to bring clarity to these black boxes and make them transparent, because organizations do want to implement AI tools to offer efficiencies or optimizations within their organizations. However, information security policies may not allow it.
Related links:
Sarah O’Keefe: AI and content: Avoiding disaster
Sarah O’Keefe: AI and accountability
Writemore AI
LinkedIn:
Nathan Gilmour
Sarah O’Keefe
Transcript:
Introduction with ambient background music
Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations.
Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it.
Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change.
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off.
End of introduction
Sarah O’Keefe: Hey everyone. I’m Sarah O’Keefe. Welcome to another episode. I am here today with Nathan Gilmour, who’s the Chief Technical Officer of Writemore AI. Nathan, welcome.
Nathan Gilmour: Thanks, Sarah. Happy to be here.
SO: Welcome aboard. So tell us a little bit about what you’re doing over there. You’ve got a new company and a new product that’s, what, a year old?
NG: Give or take, yep.
SO: Yep. So what are you up to over there? Is it AI-related?
NG: It is actually AI-related, but not AI-related in the traditional sense. We've built a tool that helps technical authoring teams convert from traditional Word or PDF formats, which make up the bulk of the technical documentation ecosystem, to structured authoring. That means they get all of the benefits of reuse, easier publishing, and high compatibility with various content management systems, and they can do it in minutes where traditional conversions could take hours. So it really helps authoring teams get their content out to the world at large in a much more efficient and regulated fashion.
SO: So I pick up a corpus of 10 or 20 or 50,000 pages of stuff, and you’re going to take that, and you’re going to shove it into a magic black box, and out comes, you said, structured content, DITA?
NG: Correct.
SO: Out comes DITA. Okay. What does this actually … Give us the … That's the 30,000-foot view. So what's the parachute-level view?
NG: Perfect. Under the hood, it's actually a very deterministic pipeline. A deterministic pipeline means there is a lot more code supporting it; it's not an AI inferring what it should do. There's actual code that guides the conversion process first. So going from, let's say, Word to DITA, there are tools within the DITA Open Toolkit that facilitate that much more mechanically, rather than trusting an AI to do it. We know that AI struggles with structure, and it becomes more and more inaccurate as context windows expand. So if we feed these models far more mechanically created content, they become much more accurate. You're only trusting them with the nuanced parts of the process. There's a big difference between determinism and probabilism: determinism is the mechanical conversion of something, while probabilism is allowing the AI to infer a process. That's where we differ; our process is much more deterministic versus allowing the AI to do everything on its own.
SO: So is it fair to say that you combined the … And for deterministic, I’m going to say scripting. But is it fair to say that you combined the DITA OT scripting processing with additional AI around that to improve the results?
NG: Correct. It also expedites the results: instead of having a human do much of the semantic understanding of the document, we allow the AI to do it as a far more focused task. Machines can read faster.
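To make that deterministic-versus-probabilistic split concrete, here is a minimal sketch in Python. It is illustrative only: the word2dita command and the classify_topic_type heuristic are hypothetical stand-ins, not Writemore's actual pipeline or API. The point is that code decides the structure, and the model is asked only a narrow semantic question.

```python
import subprocess
from pathlib import Path

def convert_mechanically(docx: Path, out_dir: Path) -> Path:
    # Deterministic step: a rule-based converter (invoked here via a
    # placeholder "word2dita" command) decides the structure, not the model.
    subprocess.run(["word2dita", str(docx), "--out", str(out_dir)], check=True)
    return out_dir / (docx.stem + ".dita")

def classify_topic_type(fragment: str) -> str:
    # Probabilistic step: in a real pipeline a language model would answer one
    # narrow question (task, concept, or reference?) rather than performing the
    # whole conversion. A trivial keyword heuristic stands in for that call here.
    return "task" if "<steps>" in fragment else "concept"

def pipeline(docx: Path, out_dir: Path) -> None:
    dita_file = convert_mechanically(docx, out_dir)   # code decides structure
    fragment = dita_file.read_text(encoding="utf-8")
    topic_type = classify_topic_type(fragment)        # model handles nuance only
    print(f"{dita_file.name}: suggested topic type = {topic_type}")
```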
SO: Okay. And so for most of us, when we start talking about AI, most people think large language model and specifically ChatGPT, but that’s not what this is. This is not like a front-end go play with it as a consumer. This is a tool for authors.
NG: Correct. And even further to that, it's a partner tool for authors. It allows them to continue authoring in a format they're familiar with. Let's take Microsoft Word, for example. Sometimes the shift from Word to structured authoring can be an enormous upheaval. Allowing authors to keep authoring in a format they're good at and familiar with, and then having a partner tool that expedites the conversion to structured authoring so they can maintain a single source of truth, makes things more manageable and more reliable in the long run. So instead of effectively causing a riot with the authoring teams, we can empower them to continue doing what they're good at.
SO: Okay. So we drop the Word file in and magically DITA comes out. What if it’s not quite right? What if our AI doesn’t get it exactly right? I mean, how do I know that it’s not producing something that looks good, but is actually wrong?
NG: Great question. That's where, prior to doing anything further, there is a review period for the human authors. In the event that the AI does make a mistake, it's completely transparent: the output, the payload, as we describe it, comes with a full audit report. Every determination that the AI makes is traced, tracked, and explained. Further to that, the humans can take that payload out and open it up in an XML editor. At this point, the content is converted and ready to go into the CCMS.
Prior to doing that, it can go to a subject matter expert who is familiar with structured authoring for a final validation of the content to make sure that it is accurate. The biggest differentiator, though, is that the tool never creates content. The humans need to create content because they are the subject matter experts within their field. They create the first draft. The tool takes it and converts it, but doesn't change anything; it only works with the material as it stands. Once that is complete, it goes back into another human-centered review, so there are audit trails and it is traceable. And there is a final touchpoint by a human prior to the final migration into their content management system.
SO: So you’re saying that basically you can diff this. I mean, you can look at the before and the after and see where all the changes are coming in.
NG: Correct.
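The audit-and-diff workflow Nathan describes could look something like the following minimal sketch. The AuditEntry fields are illustrative assumptions, not the tool's actual payload schema, and the before/after comparison uses Python's standard difflib.

```python
import difflib
import json
from dataclasses import dataclass, asdict

@dataclass
class AuditEntry:
    element: str        # e.g. "section 2.3, numbered list" (illustrative field)
    decision: str       # what the pipeline did, e.g. "mapped to <steps>"
    rationale: str      # why, so a reviewer can check the reasoning
    source_span: str    # where in the original Word/PDF the content came from

def write_audit_report(entries: list[AuditEntry], path: str) -> None:
    # Every determination is written out so reviewers can trace it later.
    with open(path, "w", encoding="utf-8") as fh:
        json.dump([asdict(e) for e in entries], fh, indent=2)

def diff_before_after(before: str, after: str) -> str:
    # Unified diff of source text vs. converted output for the human review step.
    return "\n".join(difflib.unified_diff(
        before.splitlines(), after.splitlines(),
        fromfile="original", tofile="converted", lineterm=""))
```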
SO: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do.
NG: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool. But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, they're largely black boxes. Where we want to come in is to bring clarity to these black boxes and make them transparent, I guess you can say, because organizations do want to implement AI tools to offer efficiencies or optimizations within their organizations. However, information security policies may not allow it.
One of the added benefits that we have baked into the tool from a backend perspective is its ability to be completely internet-unaware. Meaning if an organization has the capital and the infrastructure to host a model, the tool can be plugged directly into their existing AI infrastructure and use its brain, which, realistically, is what the language model is. It's just a brain. So if companies have invested the time and the capital to build out this infrastructure, the Writemore tool can plug right into it and follow those preexisting information security policies, without having to worry about something going out to the World Wide Web.
SO: So the implication is that I can put this inside my very large organization with very strict information security policies and not be suddenly feeding my entire intellectual property corpus to a public-facing AI.
NG: That is entirely correct.
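In practice, "internet-unaware" means the only model endpoint the tool talks to is one the organization hosts itself. Here is a hedged sketch of what that wiring might look like; the internal URL and model name are hypothetical, and it assumes the self-hosted server exposes an OpenAI-compatible chat endpoint.

```python
import requests

# Hypothetical self-hosted endpoint on the internal network; nothing leaves it.
INTERNAL_LLM_URL = "http://llm.internal.example:8000/v1/chat/completions"

def ask_internal_model(prompt: str, model: str = "internal-llama") -> str:
    # Send a focused prompt to the organization's own model server
    # (an OpenAI-compatible API served inside the firewall).
    resp = requests.post(
        INTERNAL_LLM_URL,
        json={"model": model,
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```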
SO: We are not doing that. Okay. So I want to step back a tiny bit and think about what it means, because it seems like the thing that we’re circling around is accountability, right? What does it mean to use AI and still have accountability? And so, based on your experience of what you’ve been working on and building, what are some of the things that you’ve uncovered in terms of what we should be looking for generally as we’re building out AI-based things? What should we be looking for in terms of accountability of AI?
NG: The major accountability of AI is what could it look like if a business model changes? Let’s kind of focus on the large players in the market right now. There