
Data Strategy and AI Readiness: Three Critical Questions for Delivery Capability

Last year, I wrote that if a CEO says the company needs an AI strategy, what they are really saying is that the company needs a better data strategy. That argument matters even more now. The hype has not gone away; it has simply moved closer to the core of the business, where the costs of weak foundations are higher and the risks of false confidence are harder to contain. In 2025, the risk was that enterprises would let AI excitement distract them from the hard work of building trusted data, clear priorities, and business alignment. In 2026, the risk is sharper: many organizations are moving ahead as if visible AI activity were evidence of readiness, when in fact it may be masking weak foundations underneath. As I argued last year, companies that chase an “AI strategy” before solidifying a demand-driven data strategy are putting the cart before the horse. If your CEO wants AI, what they really want is the data strategy and foundation capable of sustaining it.

Next month, Tom Salas, IIA Expert on AI governance, will shed light on five key AI trends based on an analysis of more than 1,000 advisory conversations from 2024 to the present. Tom’s session will show that we’re seeing more energy around AI operations, governance, and agentic implementation, even as foundational questions around lifecycle discipline, measurement, and data readiness remain comparatively underdeveloped in many enterprises. Another trend line in our advisory analysis points in the same direction: complexity is rising across tech platforms, data quality, and change management at the same time. We may read this data as a sign of market maturity, and in one sense it is. Enterprises are moving beyond asking whether AI can be built and toward asking how it can be deployed, governed, and used responsibly. But it is also a warning that many organizations are trying to get agentic AI programs off the ground before they have built the delivery environment required to support them. In our view, this trend line is a canary in the coal mine.

To be clear, some enterprises do have the foundation to move aggressively here. They have invested in data quality, integration, governance, and operating discipline for years, and they are better positioned to turn agentic AI into something sustainable. But most organizations, based on our assessment insights and advisory work, likely do not. Many may not know what questions to ask about their data strategy as they try to stay on top of AI advancements.  

Over the next few weeks, I want to examine data strategy and foundations in the context of AI through four practical questions enterprise data and analytics leaders should be asking. Can we deliver analytics and AI at scale? Is the business using them to make better decisions? Is our operating model enabling success or blocking it? And where do we stand relative to peers? Each of these questions points to a different pillar of enterprise readiness. This first blog begins on the supply side, with delivery capability — the quality, integration, metadata, and fit-for-purpose architectural conditions that determine whether AI can move beyond isolated pilots and become something scalable, supportable, and trustworthy in the enterprise. 

Supply-Side Question #1: “How does data move from source to use?”

One of the more persistent mistakes in enterprise AI is treating the supply side as a storage problem. It is not. For most organizations, the issue is not whether data exists somewhere in the estate. The issue is whether it can move — with enough timeliness, enough context, and enough reliability — to the point where an analytical or AI application can use it well. A strong data strategy does not begin by admiring what sits in source systems. It begins by understanding the information economy those systems are meant to serve: who needs what data, how fast they need it, at what level of detail, and under what constraints. In IIA’s view, the point of data strategy is to improve the availability, timeliness, and quality of data, in that order, for the constituencies that depend on it. That ordering matters more in AI than many leaders realize. AI programs rarely fail because the enterprise lacks data in the abstract. They fail because the data does not arrive where it is needed, when it is needed, in a form that is usable enough to support the task at hand.

A more useful way to frame the discussion is operationally rather than abstractly. Broad judgments about whether data is “good” tend to obscure the conditions that most directly affect AI readiness. Is the data available at the lowest useful grain, or has it been over-aggregated for legacy reporting needs? Does it move with enough frequency to support the workflow we have in mind, or are we asking AI to operate on stale snapshots? Does the receiving team get the metadata required to understand semantics, provenance, caveats, and fitness for purpose, or are they inheriting a black box and calling it readiness? These are not technical afterthoughts. They are supply-side design choices. And they often determine, well before model performance enters the discussion, whether AI can become something more than a protected experiment.
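
To make those design choices a bit more tangible, here is a minimal sketch in Python of what encoding them as explicit, checkable conditions might look like. The descriptor fields, names, and thresholds are hypothetical illustrations, not an IIA standard or anyone’s reference implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class DatasetDescriptor:
    """Hypothetical supply-side metadata for a dataset feeding an AI use case."""
    name: str
    grain: str                      # e.g., "transaction" vs. "daily_summary"
    last_refreshed: datetime
    provenance: str                 # upstream system of record
    semantics_documented: bool      # are field meanings and caveats recorded?
    known_caveats: list[str] = field(default_factory=list)

def supply_side_gaps(ds: DatasetDescriptor,
                     required_grain: str,
                     max_staleness: timedelta) -> list[str]:
    """Return the supply-side conditions that would block this use case."""
    gaps = []
    if ds.grain != required_grain:
        gaps.append(f"grain is '{ds.grain}'; the workflow needs '{required_grain}'")
    if datetime.now() - ds.last_refreshed > max_staleness:
        gaps.append("data is staler than the workflow tolerates")
    if not ds.semantics_documented:
        gaps.append("no documented semantics: downstream inherits a black box")
    return gaps
```

The code itself is beside the point. What matters is that grain, timeliness, and metadata become testable supply-side commitments rather than assumptions discovered only after a model underperforms.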

There is also an important mindset shift here. In many enterprises, source system owners still behave as though their job is to perfect data before making it broadly usable. That instinct is understandable, especially in regulated environments. But it can become a constraint. IIA’s research has long argued that the role of source systems in a modern analytics environment is not to produce perfect data. It is to make data available as close to real time as possible, at the lowest practical grain, with enough metadata to help downstream users judge quality, meaning, and limitations for themselves. Dirty data is often the natural state of enterprise data. The strategic question is not whether you can eliminate that reality. It is whether your architecture, metadata, and governance practices make that reality visible and manageable.

As organizations push toward agentic AI, the quality of the delivery environment starts to matter even more. These systems require data flows that preserve context, support traceability, and can hold up under repeated use. When the supply side still depends on bespoke extracts, informal handoffs, and local interpretation of what fields mean, the challenge extends beyond data quality alone. The enterprise has not yet built a dependable path from source to use, which makes this a data strategy issue as much as an AI one. For data and analytics leaders, the more productive starting point is to ask where needed data gets delayed, distorted, or stripped of context; which flows are truly reusable versus heavily customized; and where metadata travels with the data versus where meaning gets lost along the way. Those answers tend to give leaders a much clearer picture of AI readiness, and a more practical starting point for improvement.

Supply-Side Question #2: “What does ‘fit for purpose’ mean for the data we want AI to use?”

Quality becomes more useful when it is defined in relation to purpose. In many organizations, data quality is still treated as a universal threshold: the data is either clean enough to release or it is not. That sounds disciplined, but it often obscures what matters most for AI readiness. Some uses require engineered certainty and tightly governed consistency. Others require access to lower-grain, less-refined data precisely because signal lives alongside noise. In IIA’s view, source systems are not capable of producing perfect, fully consistent enterprise data, and most organizations cannot afford to engineer perfection into every flow. A stronger supply-side posture makes data available with clear metadata about semantics, caveats, and known quality issues, so teams can judge whether it is fit for the task in front of them.

Here, the pursuit of “one version of the truth” can become a trap. For traditional information consumers, a singular answer is often essential. There cannot be nine different revenue totals for the same quarter. But AI and advanced analytics are not always consuming finished information; they are often working from raw material. Over-normalization, over-aggregation, and excessive cleansing can remove useful variation, flatten important context, and narrow what the data can reveal. In that setting, quality still matters, but fitness for purpose may require access to both engineered data for standardized consumption and less-engineered source data for analytical work.

A better starting point for data and analytics leaders is to make the tradeoffs explicit. Where should quality trump availability, and where should availability win with caveats attached? Which use cases require harmonized, highly curated data, and which are better served by lower-grain data with clear lineage and metadata? Where is the organization optimizing for control in ways that unintentionally limit analytical value? Those are the kinds of supply-side questions that move the conversation forward. They help data and analytics leaders shift from abstract quality debates to practical decisions about access, readiness, and the kind of data environment AI needs.
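
One way to picture that shift is to let quality thresholds vary by purpose rather than holding all data to a single universal bar. In the illustrative Python sketch below, the purposes, rules, and thresholds are all hypothetical; the point is that the same data profile can fail a reporting-grade check while being perfectly fit for analytical work.

```python
# Purpose-dependent fitness rules: illustrative assumptions, not IIA standards.
FITNESS_RULES = {
    # Finished information for standardized consumption: engineered certainty.
    "financial_reporting": {"max_null_rate": 0.0, "require_dedup": True,
                            "allow_raw_grain": False},
    # Analytical and AI work: tolerate noise, preserve grain and variation.
    "ml_feature_source":   {"max_null_rate": 0.15, "require_dedup": False,
                            "allow_raw_grain": True},
}

def is_fit_for_purpose(profile: dict, purpose: str) -> bool:
    """profile holds observed dataset stats, e.g. null rate and dedup status."""
    rules = FITNESS_RULES[purpose]
    if profile["null_rate"] > rules["max_null_rate"]:
        return False
    if rules["require_dedup"] and not profile["deduplicated"]:
        return False
    if profile["raw_grain"] and not rules["allow_raw_grain"]:
        return False
    return True

profile = {"null_rate": 0.05, "deduplicated": False, "raw_grain": True}
is_fit_for_purpose(profile, "financial_reporting")  # False: needs engineered certainty
is_fit_for_purpose(profile, "ml_feature_source")    # True: noise tolerated, grain kept
```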

Supply-Side Question #3: “Have we built a reusable path to data — or is every AI use case still negotiating its own way through the enterprise?”

This may be the most important supply-side question of all, because it shifts the discussion from isolated readiness to institutional readiness. A surprising number of organizations can support one promising use case with enough effort, enough executive attention, and enough local problem-solving. Far fewer have built a data environment that makes the next use case easier, faster, and more reliable to support. That is where data strategy shows its value. A strong data strategy creates a repeatable path from source to analytical use, with the right level of detail, the required timeliness, and enough metadata and lineage to preserve meaning and support governance at scale.

For data and analytics leaders, this is where the supply-side conversation becomes more strategic. The issue is no longer whether a team can get access to data for a single model or workflow. The issue is whether the enterprise is reducing friction over time. Are integration patterns becoming more reusable? Are metadata and semantics traveling with the data more consistently? Are source-side decisions improving downstream accessibility and traceability, or are teams still compensating for the same structural weaknesses project after project? Enterprises do not become AI-ready because they accumulate pilots. They become AI-ready when their delivery environment starts producing leverage.
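
As a rough illustration of what that leverage looks like, consider the difference between a bespoke extract and a registered, reusable delivery pattern. Everything in the sketch below, from the registry to the pattern and use case names, is hypothetical; the point is that each new use case binds to an existing governed path instead of negotiating its own.

```python
# A hypothetical registry of reusable source-to-use delivery patterns.
PATTERN_REGISTRY: dict[str, dict] = {}

def register_pattern(name: str, source: str, grain: str,
                     carries_metadata: bool, lineage_tracked: bool) -> None:
    PATTERN_REGISTRY[name] = {
        "source": source, "grain": grain,
        "carries_metadata": carries_metadata,
        "lineage_tracked": lineage_tracked,
        "consumers": [],
    }

def bind_use_case(use_case: str, pattern_name: str) -> dict:
    """Reuse an existing path; a missing pattern means no reusable path yet."""
    pattern = PATTERN_REGISTRY[pattern_name]  # KeyError = bespoke negotiation ahead
    pattern["consumers"].append(use_case)
    return pattern

# The second and third use cases reuse the path the first one justified building.
register_pattern("orders_stream", source="ERP", grain="transaction",
                 carries_metadata=True, lineage_tracked=True)
bind_use_case("demand_forecasting", "orders_stream")
bind_use_case("agentic_order_triage", "orders_stream")
```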

In closing, I encourage you to ask this question as a practical test: If AI depends on data arriving at the right place, at the right time, at the right level of detail, with the metadata and controls needed to use it responsibly, have we actually built a supply-side environment designed to deliver that — or are we still forcing every new use case to negotiate its own path to data? The answer to that question says a great deal about where your data strategy stands today, and where the real work begins next. 

Data Strategy Hub

Get practical frameworks and IIA Expert guidance to strengthen the dynamic between data strategy and analytics delivery.