The AI market is loud right now. Every vendor is promoting agents. Every roadmap includes autonomy. The public conversation, some of it very influential, almost inevitably drifts toward digital coworkers and self-directed workflows.
Inside complex, non-digital-native enterprises, the picture is more measured.
Most of the data and analytics leaders we work with are in the middle of a serious modernization effort. A cloud-based data and AI platform is in place, at least in MVP form. Core ingestion pipelines are running. Shared data products are beginning to take shape. Governance programs exist, though they are still maturing. Business units are starting to rely on centralized data instead of building everything on their own.
It’s real progress. It’s also unfinished.
Data products are just getting off the ground. Quality standards are tightening. Ownership models are still being clarified between central teams and the business. Platform reliability continues to improve, but it still requires active management. Most organizations would describe their foundation as “working,” not yet industrialized. At the same time, expectations are rising.
Senior leadership, and even boards, want to know “where’s the AI for that,” and they want to see AI change operational performance. Many non-technical leaders expect immediate, measurable improvements from the technology: Are planning cycles faster? Do we have a clearer view of supplier risk? Is engineering moving any quicker? They are asking how the investments made in AI translate into tangible outcomes.
This is where we see many data and analytics leaders feeling the pressure.
They have built enough capability to support AI experimentation. They have not yet built the operating model maturity to scale autonomy without introducing risk.
That tension between progress on the foundation and pressure on outcomes defines the moment for many enterprises.
AI Readiness Assessment
The IIA AI Readiness Assessment (AIRA) is a competency-based assessment of organizational readiness for adopting and deploying deep learning and generative AI applications. Designed to augment IIA’s Analytics Maturity Assessment (AMA), the AIRA focuses on the organizational, technical, and governance conditions required to deploy AI safely, effectively, and at scale.
You Built the Foundation. Now, Leverage Business Constraints.
Getting an enterprise data and AI MVP into production at a large, legacy organization is difficult work. It requires funding battles, architectural tradeoffs, governance debates, and real change management across business units.
If teams are actively consuming shared data products, that didn’t happen by accident. It took years of alignment, hiring, standard-setting, and incremental credibility-building. Most CDAOs had to earn trust before they earned adoption.
But even when the foundation is real, it does not automatically translate into AI impact.
Expanding ingestion, storage, processing, and security strengthens the platform. Maturing governance improves confidence in the data. Launching data products reduces duplication and local workarounds.
All of that matters. But it doesn’t change operating performance on its own.
We see this pattern across clients. Platform capability expands. Roadmaps grow. New services get added. At the same time, business stakeholders still ask a simple question: what is different in how we operate?
The assumption is understandable: once the platform is mature enough, AI — especially agent-based AI — should layer in naturally. In practice, it doesn’t.
Data products enable access and consistency. AI outcomes require redesigning pieces of work. Those efforts sit with different leaders, follow different funding paths, and carry different risk profiles.
If you want to move forward, start by isolating two or three business processes where performance is constrained. Define the baseline economics. Where does time accumulate? Where does rework happen? Where do handoffs create friction?
Then evaluate where AI fits into that specific workflow.
Until AI attaches to a defined business constraint, platform expansion feels like progress but doesn’t register as performance improvement. The move after the MVP is not simply more capability. It is connecting the platform to how work gets done.
Infrastructure and Reliability Are Still Moving Targets
The conversation around agentic AI often assumes that infrastructure and reliability are largely solved. In practice, most enterprises are still navigating both.
Across clients, we see platform teams working through real constraints. Compute budgets are scrutinized more closely than they were two years ago. Capacity planning conversations are more frequent. FinOps is no longer optional. Even large cloud providers continue to balance performance, cost, and availability as demand grows.
Why does this matter for agent-based systems? An unreliable agent consumes the same infrastructure as a reliable one. A workflow that loops or retries unnecessarily still burns compute. A model that produces inconsistent results still requires downstream remediation. The cost shows up in cloud spend, but it also shows up in operational friction.
For organizations whose data platforms are still maturing, that tradeoff becomes visible quickly.
Reliability at scale remains uneven across the market. We have seen enterprises adjust deployment plans after early production issues surfaced. In some cases, agents performed well in controlled environments but struggled when exposed to messy, real-world workflows. The cleanup effort often outweighed the initial gains.
That experience shapes how more disciplined teams are approaching 2026.
As platforms expand on AWS or any major cloud, performance and efficiency are being treated as gating criteria. Before production rollout, teams are defining benchmarks in sandbox environments. They are measuring cost per successful outcome rather than cost per API call. They are requiring clear failure reporting and defined rollback paths.
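The shift from cost per API call to cost per successful outcome is simple to express. A minimal sketch, with entirely illustrative figures, shows why a retry-heavy agent can look cheap by the first metric and expensive by the second:

```python
# Hypothetical sketch: comparing cost per API call with cost per
# successful outcome for an agent workflow. All figures are illustrative.

def cost_per_call(total_spend: float, total_calls: int) -> float:
    """Naive metric: spreads spend across every call, including retries."""
    return total_spend / total_calls

def cost_per_successful_outcome(total_spend: float, successes: int) -> float:
    """Gating metric: spend divided by outcomes that required no rework."""
    return total_spend / successes

# An agent that loops or retries looks cheap per call, costly per outcome.
spend = 1200.0     # monthly compute/API spend for the workflow ($)
calls = 10_000     # total model invocations, including loops and retries
successes = 400    # outcomes accepted without downstream remediation

print(f"cost per call:    ${cost_per_call(spend, calls):.2f}")      # $0.12
print(f"cost per outcome: ${cost_per_successful_outcome(spend, successes):.2f}")  # $3.00
```

Teams using outcome-based cost as a gating criterion typically set the threshold in the sandbox benchmark phase, before production rollout.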
Enterprises are learning that autonomy layered onto an immature reliability model amplifies risk. The teams that scale successfully tend to do so after they have made infrastructure economics and performance standards explicit. It’s a mindset shift.
What Comes After Data Products
Across many organizations, data products are just beginning to gain traction. Business teams trust them more than they have in the past. Adoption is growing. Standards are improving. The platform feels more stable.
At that point, the question naturally becomes: what comes next?
What we see in practice is that the next step is rarely enterprise-wide agent deployment. Instead, teams start looking at specific areas where performance still lags in day-to-day operations.
An engineering review process that stretches longer than anyone is comfortable with. A supplier risk backlog that builds faster than it clears. A contract analysis queue that forces legal to prioritize volume over depth. These constraints already exist. They are visible. They carry cost.
The organizations making progress tend to begin there.
They quantify the baseline before introducing AI. How long does the process take? How many people touch it? Where do delays accumulate? Where does rework occur? That clarity shapes the AI design.
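The baseline questions above can be captured in a small model of the process. This is an illustrative sketch only; the stage names, durations, and rework rates are invented for the example:

```python
# Illustrative sketch of baselining a process before introducing AI.
# Stage names and numbers are invented; replace with measured values.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    touch_time_hrs: float   # active work on the item
    wait_time_hrs: float    # queue/handoff delay before the stage starts
    rework_rate: float      # fraction of items sent back through the stage

def baseline(stages: list[Stage]) -> dict:
    """Summarize where time accumulates and where rework happens."""
    touch = sum(s.touch_time_hrs * (1 + s.rework_rate) for s in stages)
    wait = sum(s.wait_time_hrs for s in stages)
    worst = max(stages, key=lambda s: s.wait_time_hrs)
    return {
        "total_hours": round(touch + wait, 1),
        "pct_waiting": round(100 * wait / (touch + wait), 1),
        "biggest_queue": worst.name,
    }

stages = [
    Stage("intake", 1.0, 4.0, 0.05),
    Stage("engineering review", 6.0, 40.0, 0.30),
    Stage("approval", 0.5, 16.0, 0.10),
]
print(baseline(stages))
```

Even a rough model like this makes the constraint visible: if most elapsed time is queueing rather than active work, an AI that accelerates the touch time will barely move the end-to-end number.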
When AI enters the workflow, it enters with a target. Reduce review time. Improve first-pass accuracy. Shorten escalation cycles. The impact is measured against the baseline, not against model benchmarks.
This approach surfaces tradeoffs quickly. In some cases, an AI-assisted step reduces manual effort but increases compute spend. In others, it accelerates throughput but introduces new exception handling patterns. Seeing those tradeoffs early allows teams to adjust before scaling.
Hybrid patterns are common in these environments. Deterministic logic handles structured decisions. Language models support interpretation or drafting. Humans remain involved where judgment or accountability is required. Enterprises that attempted to automate entire workflows in one step often pulled back. The ones that progressed introduced AI into defined stages and expanded gradually.
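The hybrid pattern above amounts to a routing decision at each stage. A minimal sketch, assuming a simplified work item with invented fields and thresholds, might look like this:

```python
# Minimal sketch of a hybrid review step. The field names, threshold,
# and lane labels are illustrative, not a reference implementation.

from dataclasses import dataclass

@dataclass
class WorkItem:
    amount: float
    has_structured_data: bool   # can deterministic rules decide this item?
    requires_judgment: bool     # contractual ambiguity, exceptions, etc.

def route(item: WorkItem) -> str:
    """Decide which lane handles the item: rules, model-assisted, or human."""
    if item.requires_judgment:
        return "human_review"          # accountability stays with a person
    if item.has_structured_data and item.amount < 10_000:
        return "deterministic_rules"   # codified policy, no model involved
    return "llm_assisted_draft"        # model drafts, a human approves

items = [
    WorkItem(500.0, True, False),
    WorkItem(50_000.0, False, False),
    WorkItem(2_000.0, True, True),
]
print([route(i) for i in items])
# → ['deterministic_rules', 'llm_assisted_draft', 'human_review']
```

Keeping the routing logic explicit and deterministic also makes expansion gradual by design: moving a category of items from the human lane to the model-assisted lane is a visible, reviewable change rather than an emergent behavior.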
Data products create consistency and access. The work that follows is about redesigning how decisions and tasks move through the organization. That is where AI begins to influence performance rather than simply capability.
Maturity Before Autonomy
Across large enterprises, agent initiatives often fail because the organization hasn’t fully clarified how AI fits into existing decision structures.
In environments where governance programs are still maturing and operating models are still settling, ownership can remain ambiguous. When an automated recommendation leads to a questionable outcome, the conversation can shift quickly toward attribution. Was it a configuration issue? A data quality problem? A model behavior issue? Those discussions take time. They also affect confidence.
As organizations explore greater levels of autonomy, that ambiguity becomes more visible.
When an agent participates in supplier approval, compliance review, or contract drafting, decision rights need to be clearly defined. Who reviews edge cases? Who signs off on exceptions? How are failures logged and surfaced? The teams that scale successfully tend to answer those questions early, even if their first deployments remain tightly scoped.
Change management surfaces as a parallel theme.
Introducing AI into a workflow shifts how work moves through the organization. Review steps adjust. Escalation paths change. Teams begin interacting with recommendations instead of raw data. In enterprises where the platform itself is still stabilizing, that behavioral shift can outpace readiness if it is not handled deliberately.
What we consistently see is that maturity shows up in operational habits. Clear escalation paths. Defined override standards. Transparent reporting on failure and recovery. These are less visible than architecture diagrams, but they determine whether adoption holds.
Organizations that progress tend to expand autonomy in stages. They begin with AI-assisted workflows that retain human oversight. They measure outcomes in terms business leaders recognize — time saved, error reduction, risk visibility. They use that evidence to inform the next expansion.
Cloud Expansion Without Lock-In
As platforms mature, another question starts to surface in leadership conversations: how far do we lean into our cloud provider’s AI stack?
For most enterprises, the foundational cloud decision is settled. Elastic compute, managed storage, networking, and security services provide real leverage. Few organizations want to recreate those capabilities internally.
The nuance appears one layer up.
Cloud-native AI services make experimentation easier. Teams can stand up environments quickly. Managed tooling reduces setup overhead. Early pilots move faster when infrastructure and AI services are tightly integrated.
Over time, architecture decisions accumulate.
We see organizations discover that orchestration logic, prompt management, and workflow routing are deeply embedded in proprietary frameworks. Model selection becomes constrained by integration choices made during early pilots. Introducing a second provider or shifting model strategy requires more effort than expected.
The enterprises that navigate this well tend to separate concerns deliberately. Infrastructure services remain closely aligned with the cloud provider. Higher-level orchestration and business logic stay more portable. Model layers are evaluated with interchangeability in mind. Integration standards are defined before scaling beyond pilot scope.
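One common way to keep the model layer interchangeable is a thin interface between business logic and provider adapters. The sketch below is a hypothetical illustration; the adapter class names are placeholders, not real SDK calls:

```python
# Hedged sketch of model-layer portability: business logic depends on a
# small interface, and each provider lives behind an adapter. The adapter
# classes here are stubs, not real provider SDKs.

from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ManagedCloudAdapter:
    """Placeholder for a cloud-native managed model service."""
    def complete(self, prompt: str) -> str:
        return f"[managed-stub] {prompt}"

class SecondProviderAdapter:
    """Placeholder for a self-hosted or second-provider model."""
    def complete(self, prompt: str) -> str:
        return f"[alt-stub] {prompt}"

def summarize_contract(model: TextModel, contract_text: str) -> str:
    """Business logic depends only on the interface, not the provider."""
    return model.complete(f"Summarize key obligations: {contract_text}")

# Swapping providers becomes a one-line change at the call site.
print(summarize_contract(ManagedCloudAdapter(), "Supplier agreement v3"))
print(summarize_contract(SecondProviderAdapter(), "Supplier agreement v3"))
```

The interface costs little during a pilot, but it is the difference between a model swap that takes a sprint and one that requires untangling orchestration logic from a proprietary framework.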
This does not mean avoiding cloud-native AI services. It means using them with awareness that today’s acceleration decisions shape tomorrow’s flexibility.
As the agent ecosystem evolves, architectural optionality becomes a practical risk control. The organizations that preserve it retain more leverage as vendors, models, and standards continue to shift.
What This Means for 2026 AI Strategy
If your platform is live and business teams are using shared data products, you have already done meaningful work. In large enterprises, that progress represents years of investment, negotiation, and incremental trust-building.
The next phase is less about expanding capability and more about translating capability into operational change. That shift can be uncomfortable. Platform roadmaps are tangible. Workflow redesign and accountability models are more complex.
Across the organizations we work with, the ones making progress tend to make a few adjustments.
They stop using platform growth as the primary signal of AI momentum. They begin attaching AI initiatives to specific operational constraints. They measure cost per successful outcome instead of relying solely on technical metrics. They define performance thresholds before production deployment. They clarify decision rights as autonomy increases. They keep architectural flexibility in view as they scale.
The organizations that take this approach expand agent authority gradually, grounded in measured impact. The ones that move too quickly often find themselves revisiting earlier assumptions.
AI starts to function as a business capability when it changes how work moves through the organization and how resources are allocated. Until then, it remains potential.
The difference is not technology. It is operating maturity.
Making AI Work in the Enterprise
Our all-in-one guide to making AI work inside enterprise analytics, featuring IIA expert frameworks, real client inquiries, and practical guidance to help your team deploy AI confidently and deliver measurable value.