Over the past few weeks, we’ve looked at AI readiness from three connected angles. The first article focused on the supply side: the quality, integration, metadata, and architectural conditions required to move AI beyond isolated pilots. The second shifted to the demand side and asked whether data and AI efforts are tied to the business decisions that matter most. The third examined the operating model itself and raised harder questions about structure, habits, and governance arrangements inside the organization that may be slowing progress more than enabling it. Taken together, those three pieces point to a broader issue that becomes difficult to ignore once AI pressure increases. Readiness is not just a question of technology, business demand, or organizational design in isolation. It is a question of maturity across all three.
Analysis of over 1,500 IIA advisory conversations from 2024 to the present shows a sharp rise in enterprise attention to agentic AI, governance, and production-oriented deployment questions. At the same time, IIA’s Analytics Maturity Assessment findings from 2018–2024 show that many data and analytics leaders remain heavily concentrated on more foundational priorities. In the 2025 summary view of that assessment data, the top six focus areas were data capture, data quality, data trustworthiness, analytical tools, data consistency, and data integration. Six of the top ten were directly tied to data foundation, while staffing level ranked seventh, followed by prioritization, iterative approach, and business skills. That pattern tells us something important. Many organizations are not moving into AI from a fully mature analytics base. They are pursuing more advanced AI ambitions while still working through unresolved foundational and organizational issues.
Recent advisory analysis also needs to be interpreted carefully. It shows where enterprise attention is moving, which is valuable. Inquiries can reveal what leaders are worried about, where urgency is building, and what kinds of guidance are in greatest demand. But assessment findings offer a different kind of insight. IIA’s Analytics and AI maturity diagnostics are designed to show where strengths may be masking weaknesses, where uneven capabilities create structural risk, and where visible progress in one dimension obscures gaps in another. The 30-Day Enterprise Analytics and AI Baseline Assessment, for example, is built to quantify maturity across critical dimensions, surface structural bottlenecks, and reveal hidden failure points through comparative scoring, heatmaps, and variance analysis. In other words, advisory inquiries tell us what organizations are pursuing, while assessment diagnostics help show what may still be holding them back.
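The variance-analysis idea behind this kind of diagnostic can be sketched in a few lines. The dimensions, scores, and the 0.75 spread threshold below are invented for illustration only and are not IIA's actual scoring model; the point is simply that an average score can look healthy while the spread across dimensions flags an uneven profile:

```python
# Hypothetical sketch: how variance across maturity dimensions can flag
# "uneven" profiles that a simple average would hide. All names and
# numbers are illustrative assumptions, not real assessment data.
from statistics import mean, pstdev

profiles = {
    "Org A": {"data_quality": 4.2, "tooling": 4.5, "governance": 4.0, "workforce": 4.1},
    "Org B": {"data_quality": 4.8, "tooling": 4.9, "governance": 2.1, "workforce": 2.4},
}

for org, dims in profiles.items():
    scores = list(dims.values())
    avg, spread = mean(scores), pstdev(scores)
    weakest = min(dims, key=dims.get)  # lowest-scoring dimension
    # A large spread relative to the average suggests visible strengths
    # may be masking a structural weakness in the weakest dimension.
    flag = f"uneven -- check {weakest}" if spread > 0.75 else "balanced"
    print(f"{org}: mean={avg:.2f}, spread={spread:.2f} -> {flag}")
```

In this toy example, Org B's mean score is not far below Org A's, but the spread isolates governance as the dimension where strength elsewhere is creating a false sense of readiness. That is the asymmetry a heatmap or variance view is meant to surface.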
This contrast is especially important in the current market. Inquiry patterns often reflect what is most urgent or most visible. Diagnostics are more likely to surface the slower-moving internal constraints leaders may have normalized over time. Because IIA works with large, complex enterprises across industries, our client base offers a strong view into the challenges and opportunities many non-digital-native companies are facing in 2026. The patterns we see in both our assessment data and advisory analysis are therefore useful not only for understanding our own client base, but also for interpreting the broader enterprise landscape. With that perspective, there are four takeaways that stand out.
Takeaway #1: Foundational data work remains the center of gravity for many organizations
The Analytics Maturity Assessment findings presented above suggest that many enterprises are still focused on stabilizing the environment underneath analytics and AI. The more advanced conversation may now center on AI governance, scaling, and agentic deployment, but the maturity profile underneath often tells a different story. Organizations continue to devote more attention to core data conditions than to many of the organizational capabilities required to capitalize on them. This points to a practical constraint on progress. Enterprises may be moving quickly at the surface while still working through basic issues in the layer below. The significance becomes clearer when viewed through IIA’s diagnostic approach. IIA’s assessments are built to quantify maturity across critical dimensions, identify uneven capabilities, and reveal hidden failure points and structural weaknesses that leaders may not see clearly from within the organization. In practice, this means an enterprise can show visible progress in tooling, experimentation, or local AI success while still carrying serious weaknesses in governance, data foundations, cultural readiness, or operating discipline. For many large, complex, non-digital-native organizations, the challenge is not a total absence of capability, but an uneven maturity profile in which strengths create a false sense of readiness. Foundational work remains significant for that reason. It is often not the most visible sign of progress, but it is still the layer most likely to determine whether progress can hold.
Takeaway #2: AI ambition is rising faster than diagnosed readiness
IIA’s advisory trend lines clearly show that enterprise attention is moving toward more advanced AI deployment. In recent analysis covering more than 1,500 enterprise advisories and 218 agentic AI standardization issues, interest in agentic AI and governance has increased substantially, while only 18 of 355 formal AI guidance requests, or 5.1 percent, focused on talent and change management. That pattern tells us something meaningful about where leaders are directing their attention. They are increasingly asking how to deploy AI in more advanced, governed, and scalable ways. But when those inquiry patterns are read alongside IIA’s assessment findings, the picture becomes more complicated. The diagnostic view suggests that many organizations still carry unresolved weaknesses in data foundation, operating discipline, and organizational capability that may not be obvious from the inquiry stream alone. In other words, enterprises may be asking next-stage questions before they have fully addressed earlier-stage constraints. That does not mean the ambition is misplaced. It means the market conversation can overstate readiness when it is based only on what leaders are pursuing, rather than on a fuller view of the structural conditions underneath. For large, complex enterprises, this is often where risk accumulates: visible momentum in AI can create the impression of maturity, while the underlying environment remains uneven and not yet strong enough to support scaled, sustained value.
Takeaway #3: Governance is becoming central to maturity, not secondary to it
One of the clearest signals in the advisory trend lines is the movement from market positioning toward governance. In the sampled agentic AI issues, security and safety concerns rose from 7.1 percent to 33.3 percent, while market and value declined from 46.7 percent to 35.6 percent. That does not mean value matters less. It means the environment is changing. As organizations move beyond early experimentation and into more operational use of AI, they are confronting the practical demands of oversight, auditability, accountability, and production reliability. Governance is therefore no longer something applied after deployment. It is increasingly part of what defines whether an organization is mature enough to scale. This is also where the assessment perspective becomes useful again. Strong tooling or local innovation can create the impression that a program is advancing quickly, but if governance maturity is lagging behind adoption, the organization may be moving faster than its control environment can support. In IIA’s diagnostic language, those asymmetries are often where execution risk becomes most acute.
Takeaway #4: Workforce readiness is becoming the next major constraint, but not on its own
The advisory analysis suggests that workforce readiness is increasingly important, even if it is still underrepresented in formal guidance demand. Among 355 formal AI guidance requests, only 18 focused on talent and change management, despite a broader argument in the analysis that workforce readiness plays an outsized role in determining whether advanced AI creates value in practice. The same analysis points to a persistent reliability problem, with failure rates holding around 13 percent across periods despite rapid technology improvement. The implication is that better models alone do not resolve the issue. Reliability still depends on human oversight, critical evaluation, and the ability to manage exceptions in context. At the same time, the assessment findings suggest that workforce readiness is only one layer of the challenge. Staffing appears in the top ten focus areas, but only after six foundational data issues, while prioritization, iterative approach, and business skills appear lower still. Many organizations, then, are not facing a single missing prerequisite. They are facing a stacked maturity problem in which foundational data issues, operating model gaps, governance needs, and workforce capability gaps all interact. For most enterprises, that is the more balanced way to understand the current state of AI readiness.
Seen together, these findings point to a fuller conclusion than any one dataset provides on its own. AI maturity still depends on analytics maturity. Organizations do not move cleanly from legacy data environments into advanced AI simply because the market has shifted its attention there. The progression is more cumulative than that. First comes data foundation: capture, quality, trust, consistency, integration. Then comes stronger analytical effectiveness: tools, reusable delivery, better alignment to decisions. Then comes operating maturity: clearer ownership, prioritization, business skills, iterative execution, and shared governance. Governed AI can scale only when those layers are strong enough to support it. Agentic AI depends on all of them.
As noted earlier, IIA’s work with large, complex enterprises across industries gives us a strong vantage point on the realities non-digital-native companies face in 2026, and on the broader enterprise landscape beyond our own client base. With that perspective, it is helpful to think about organizations in three broad maturity groups.
Maturity Group #1: Foundation-Constrained
The first group is foundation-constrained organizations. These are firms whose agenda is still dominated by data capture, quality, trustworthiness, consistency, and integration. They may be interested in AI and may already be experimenting with it, but their more immediate challenge is to create a reliable information environment. For these organizations, the main implication is that AI readiness should be framed through foundation work rather than through isolated use cases alone. Efforts to modernize data access, improve trust, simplify integration, and strengthen governance are not separate from AI strategy. They are part of it.
Maturity Group #2: Analytics-Capable, Organizationally Constrained
The second group is analytics-capable but organizationally constrained organizations. These enterprises tend to have stronger data and platform capabilities, but progress is limited by staffing, prioritization, fragmented ownership, weak business alignment, or underdeveloped iterative processes. They can often launch pilots and demonstrate pockets of value, yet struggle to turn those gains into broader enterprise capability. For this group, the implication is that technical maturity has to be matched by operating maturity. Clearer decision rights, better handoffs, stronger business engagement, and more deliberate adoption practices become increasingly important as AI demand grows.
Maturity Group #3: AI-Ambitious, Sequence-Risked
The third group is AI-ambitious but sequence-risked organizations. These firms are moving more assertively into copilots, automation, and agentic use cases. Their challenge is not lack of momentum. It is that deployment ambition may be running ahead of governance, oversight, and workforce preparedness. For these organizations, the key implication is not to step away from innovation, but to improve sequencing. Human-in-the-loop controls, role-specific upskilling, better accountability for reliability, and stronger links between AI use and business value become essential. In this segment, success is often less about access to sophisticated technology than about the maturity of the surrounding system.
For enterprise data and analytics leaders, the current moment should not be understood as a simple shift from analytics to AI. It is better understood as a maturity test across the full stack of enterprise capability. Data foundations still matter. Business decision alignment still matters. Operating model design still matters.
Governance and workforce readiness now matter more than many organizations expected. AI has not replaced these earlier requirements. It has raised the cost of neglecting them.
That is the pattern we see in both the assessment data and the advisory trend lines. Enterprises are moving toward more advanced forms of AI, but many are doing so while foundational and organizational issues remain active. Some will be able to absorb that tension because they have invested in the underlying system for years. Many others will find that visible AI activity is easier to create than sustainable AI capability.
The more useful question, then, is not whether an organization is “doing AI.” It is whether the organization has built the maturity required to support AI in a way that is trusted, governed, reusable, and tied to business performance. That is a higher standard than the market often suggests. But it is the one that matters most now.