Why Do AI Initiatives Fail? Cydni Tetro Joins Jacob Andra to Discuss Common Breakdowns for Digital Transformation Projects
Tue Feb 03 2026
Enterprise AI projects fail at alarming rates. MIT research shows most organizations struggle to achieve meaningful ROI from their AI investments. In this episode of The Applied AI Podcast, host Jacob Andra sits down with Cydni Tetro to explore why enterprise AI transformation is fundamentally different from individual productivity gains, and what separates successful deployments from expensive failures.
Cydni brings rare depth to this conversation. Her career spans six years at Disney Imagineering commercializing innovation across business units, serving as CIO at one of the largest Coca-Cola bottlers managing 8,000 employees, and now leading digital transformation across a private equity portfolio. She also founded the Women's Tech Council, which has activated over 40,000 women in technology careers and generates $32 million in annual economic value for the state of Utah.
The conversation addresses a critical gap in how organizations think about AI. Most discussions focus on individual productivity, such as using ChatGPT to draft emails faster or summarize documents. These gains are real but represent only the outer layers of what AI can accomplish. The deeper value requires tackling enterprise-wide challenges involving data integration, systems engineering, legacy infrastructure, and organizational change.
Cydni identifies three distinct categories of enterprise AI projects based on data complexity:
First, projects with centralized, structured data sources. She shares how her team deployed AI-powered cybersecurity tools in just 60 days because email and threat data already flowed into a single funnel. The data was accessible and structured, making implementation straightforward.
Second, legacy systems with legacy data. Manufacturing environments present particular challenges. Operational technology (OT) networks have historically been isolated from IT networks. These OT networks run plant equipment and were never designed to connect to the outside world. Adding AI requires new sensor arrays, network architecture changes, cybersecurity considerations, and workforce training. Some manufacturing lines are 20 to 30 years old, and organizations must maximize their lifetime value while somehow integrating modern AI capabilities.
Third, distributed datasets that must be organized before AI can deliver value. A procurement AI project Cydni evaluated would have required massive effort to create structured data from tens of thousands of contracts, serving a team of only two to three people. The ROI calculation did not justify the lift.
Common failure modes discussed in the episode:
* Targeting the wrong use case
* Tackling the right use case but with the wrong tool
* Unready precursors (e.g., data that is not yet prepared)
* Not accounting for all the adjacencies and multidirectional dependencies
* Tackling too much at once, causing delays in demonstrating value
* Scope creep from stakeholders adding requirements
* Distributed datasets that must be organized before AI can work
* ROI not justified given the effort required
* Teams overwhelmed by new responsibilities they were not trained for
* Lack of alignment on what minimum viable success looks like
* Inability to contain scope to demonstrate value
Host Jacob Andra is the CEO of Talbot West, an AI systems engineering company that helps enterprises avoid the common pitfalls of complex digital transformation initiatives.