
AI Is Easy to Start and Hard to Finish

Over the past year, almost every conversation I have with analytics and AI leaders starts the same way. There is pressure to “do something with AI.” Boards are asking questions. Business partners are experimenting on their own. Vendors are promising speed, scale, and transformation.

And yet, underneath all of that momentum, the questions IIA hears from clients are remarkably consistent.

  • How do we decide which AI ideas are worth pursuing?
  • Why do our pilots never fully transition into production?
  • How do we prove value when the models technically work but the business impact is unclear?
  • How do we keep these systems stable once they are live?

These are not beginner questions. They are the questions of organizations that have already invested in data, analytics, and platforms, but are struggling to translate AI activity into measurable business results.

That is why Adam McElhinney’s AI framework resonated so strongly with our community at Symposium. His work does not introduce a new technology or a novel algorithm. It gives language and structure to the exact problems our clients are already wrestling with.

And from where I sit, that validation matters.

McElhinney’s framework confirms something we have seen repeatedly over the last decade. AI failures rarely stem from a lack of modeling sophistication. They stem from weak alignment, missing fundamentals, and a lack of operational discipline.

Adam McElhinney's AI Framework: Intake, Deployment, Integration

This framework gives analytics and AI leaders a clear, practical way to move from AI ambition to execution. Rather than focusing on models or tools, it centers on the real work enterprises struggle with most: choosing the right problems, avoiding common failure points, and building solutions that can actually be sustained in production.

The Problems Clients Bring to IIA Are Not About AI Hype

When clients reach out to IIA for help with AI, they are rarely asking which model to use. They are asking how to regain control.

They are dealing with fragmented initiatives that cannot be prioritized because no one can articulate how they connect to enterprise goals. They have proofs of concept that impress technical teams but fail to earn sustained business sponsorship. They have leaders who expect AI to behave like traditional software and are surprised by the ongoing maintenance burden.

McElhinney’s first failure point captures this perfectly. A lack of shared business KPIs is not an AI problem. It is a strategy problem that AI simply exposes faster.

We see this every day. Without a small, agreed set of enterprise KPIs, AI initiatives scatter across teams. Each one sounds reasonable in isolation. None of them compound into measurable value. Leaders then conclude that AI itself is the issue, when the real issue is that there was never a common target to aim at.

This is often where our work begins: helping organizations clarify goals, quantify impact directionally, and establish a baseline so AI initiatives can be evaluated on something more concrete than enthusiasm.

This is also where independent analytics and AI assessments become an accelerator rather than a box-checking exercise. They surface misalignment early, before more money is spent on pilots that never scale. 

AI Readiness Assessment

The IIA AI Readiness Assessment (AIRA) is a competency-based assessment of organizational readiness for adopting and deploying deep learning and generative AI applications. Designed to augment IIA’s Analytics Maturity Assessment (AMA), the AIRA focuses on the organizational, technical, and governance conditions required to deploy AI safely, effectively, and at scale.

Production Is Where AI Efforts Go Quiet

One of the most sobering statistics McElhinney referenced is how many AI initiatives never make it out of pilot. We’ve all seen the numbers: 70%, 80%, 90% failure rates. The exact percentage matters less than the pattern behind it.

Teams prototype without a plan for production. Data governance, cloud architecture, security review, deployment patterns, and monitoring are treated as downstream concerns. When the model shows promise, the organization realizes the foundation is not there to support it.

At IIA, we often describe this as the illusion of progress. Activity is high. Value creation is low.

McElhinney’s framing makes the root cause unambiguous. Productionalization is not a phase. It is a design constraint that must be addressed from the start. Without it, even strong technical work has nowhere to take hold.

This is why our advisory conversations so often pivot away from models and toward operating patterns. How do you deploy consistently? How do you validate changes? How do you monitor business impact after launch?

These are not glamorous topics, but they are the difference between AI that survives contact with the enterprise and AI that quietly disappears.
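To make the last of those questions concrete, here is a minimal sketch in Python of what monitoring business impact after launch can look like: the business KPI a model is meant to move is compared against the baseline agreed before launch, and only a drop beyond an agreed tolerance is flagged. The KPI name, values, and tolerance are illustrative assumptions, not a prescription from IIA or McElhinney.

    # Hypothetical post-launch check: compare a business KPI observed after
    # deployment against the baseline agreed before launch. All names and
    # numbers are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class KpiSnapshot:
        name: str         # e.g., "conversion_rate"
        baseline: float   # value agreed before launch
        observed: float   # value measured after launch
        tolerance: float  # relative drop treated as material

    def check_kpi(s: KpiSnapshot) -> str:
        """Flag the KPI only when it falls below baseline by more than the tolerance."""
        relative_change = (s.observed - s.baseline) / s.baseline
        if relative_change < -s.tolerance:
            return f"ALERT: {s.name} down {abs(relative_change):.1%} vs baseline"
        return f"OK: {s.name} within tolerance ({relative_change:+.1%})"

    # Illustrative values only.
    print(check_kpi(KpiSnapshot(name="conversion_rate", baseline=0.12,
                                observed=0.11, tolerance=0.10)))

The point is not the code itself but the discipline it encodes: the baseline and the tolerance are agreed before launch, so post-launch conversations start from a shared number rather than a hunch.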

For leaders looking for peer-tested guidance on these questions, this is where our Expert Network creates leverage. Clients do not need another opinion. They need to know how others have solved this inside similar constraints.

Advisory Services

Guiding your analytics journey: jumpstart your road to analytics advancement with high-touch, subscription-based concierge services to guide and optimize YOUR analytics outcomes.

Validation Is a Leadership Tool, Not a Data Science Artifact

One of the most practical aspects of McElhinney’s framework is the emphasis on representative validation datasets. This is also one of the areas where executives can have the greatest impact.

Validation is often treated as a technical detail, something the data science team handles. In reality, it is how leaders arbitrate tradeoffs, manage expectations, and keep AI efforts grounded in reality.

McElhinney’s example of executives attempting to “stump the model” is one we hear often. Without a shared validation baseline, every edge case becomes a crisis. With one, leaders can distinguish between meaningful degradation and noise.

This is a subtle but powerful shift. Validation becomes a governance mechanism. It allows organizations to move faster because debates are settled empirically, not politically.

We increasingly see this pattern emerge in more mature AI programs. Validation datasets are treated as living assets. Monitoring ties back to business KPIs. Enhancements are framed as versioned investments rather than endless tweaks.
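To make the pattern concrete, here is a minimal sketch, assuming a frozen and versioned validation set, of how a candidate model can be scored against the recorded baseline so that only a drop beyond an agreed noise margin counts as meaningful degradation. The function names, metric, and margin are illustrative assumptions.

    # Hypothetical sketch: score a candidate model on a frozen, versioned
    # validation set and compare it to the recorded baseline. A drop smaller
    # than the agreed noise margin is treated as noise, not degradation.
    # All names and numbers are illustrative assumptions.
    from typing import Callable, List, Tuple

    ValidationCase = Tuple[dict, int]   # (input features, expected label)

    def accuracy(model: Callable[[dict], int], cases: List[ValidationCase]) -> float:
        """Fraction of frozen validation cases the model gets right."""
        correct = sum(1 for features, label in cases if model(features) == label)
        return correct / len(cases)

    def meaningful_degradation(candidate: float, baseline: float,
                               noise_margin: float = 0.01) -> bool:
        """True only if the candidate falls below baseline by more than the margin."""
        return (baseline - candidate) > noise_margin

    # Illustrative frozen validation set (version 3) and toy candidate model.
    validation_v3: List[ValidationCase] = [({"x": 1}, 1), ({"x": 0}, 0), ({"x": 2}, 1)]
    baseline_score = 0.95  # recorded when the current model was approved

    def candidate_model(features: dict) -> int:
        return 1 if features["x"] >= 1 else 0

    score = accuracy(candidate_model, validation_v3)
    print("candidate accuracy:", round(score, 3))
    print("meaningful degradation:", meaningful_degradation(score, baseline_score))

Because the dataset and the margin are versioned and agreed in advance, the “stump the model” debate becomes an empirical check rather than a political one.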

For organizations trying to build trust in AI internally, this is one of the highest leverage moves they can make. 

Choosing Fewer Problems Is a Strategic Advantage

Another theme in McElhinney’s framework that mirrors our client experience is the danger of doing too much at once.

Running ten or twelve AI initiatives in parallel feels ambitious. In practice, it dilutes focus, strains shared infrastructure, and overwhelms the teams responsible for support and monitoring.

The strongest programs we see pick one to three initiatives, define success criteria upfront, and commit to learning quickly. Prototypes are time-boxed. Decisions to continue or stop are explicit. Capacity for post-launch support is reserved before development begins.

That kind of discipline gives innovation a chance to compound rather than collapse under its own weight.

AI systems are not fire-and-forget. Leaders who fail to plan for the ongoing care of these systems erode trust when performance drifts and no one is accountable.

A Framework That Reflects Enterprise Reality

McElhinney’s framework resonates because it mirrors what enterprise leaders are already experiencing firsthand. The obstacles it outlines are not edge cases; they are very much mainstream. They show up consistently in client conversations and post-mortems on AI initiatives that did not deliver what was promised.  

For IIA, this reinforces the role we play alongside our clients. Our work is not about promoting AI for its own sake. It is about helping organizations make it work within their real-world constraints, align it to outcomes that matter, and build the operating discipline required to sustain progress.

If you are feeling pressure to accelerate AI while simultaneously questioning whether your foundation can support it, you are not behind. You are seeing the problem clearly.

And clarity, paired with the right guidance, is where lasting progress begins. 

Making AI Work in the Enterprise

Our all-in-one guide to making AI work inside enterprise analytics, featuring IIA expert frameworks, real client inquiries, and practical guidance to help your team deploy AI confidently and deliver measurable value.