Data Strategy and AI Readiness: Is Your Operating Model Holding You Back?

In the first blog in this series, I focused on the supply side of AI readiness, specifically the quality, integration, metadata, and architectural conditions required to move AI beyond isolated pilots. In the second, I shifted to the demand side and asked whether data and AI efforts are tied to the business decisions that matter most. But there is a third question enterprise leaders need to confront, and AI is making it harder to avoid. Is the organization itself set up to scale data and analytics well, or is the operating model now part of the problem?

AI arrives inside an enterprise environment that already has its own structure, habits, and constraints. It enters an organization shaped by org charts, funding decisions, approval paths, platform ownership, governance habits, and long-standing assumptions about who gets to do what. In many organizations, those arrangements were tolerable before. They may have slowed delivery, created duplication, or generated periodic disputes over ownership and standards. But they were manageable enough that leaders learned to live with them.

AI changes that tolerance level.

As demand for data products, automation, model deployment, governance, and business-facing insight accelerates, legacy operating weaknesses become much harder to hide. Centralized teams that once provided control begin to slow the business down. Distributed teams that once created speed begin to look fragmented and difficult to coordinate. Informal collaboration that once held things together begins to fail under the weight of more use cases, more scrutiny, and higher expectations around trust, traceability, and value delivery.

What looked like an organizational inconvenience starts to look like a structural constraint.

With the acceleration of AI experimentation and enterprise-scale initiatives, operating model questions now sit much closer to the center of data strategy and AI readiness. Platforms, pipelines, and governance are only part of a strong data strategy. The strategy also needs to clarify how the enterprise operates, including how work gets done, how decisions get made, how accountability is shared, and how business-aligned data and analytics capabilities scale. As AI increases the volume, speed, and complexity of enterprise demand, those operating questions can no longer stay in the background.

With that in mind, here are three considerations data and analytics leaders should be pressing on now.

Operating Model Consideration #1: “Are we still treating operating friction as normal?”

One of the more dangerous habits in enterprise analytics is the normalization of friction. Over time, organizations get used to long queues, redundant work, unclear ownership, tool inconsistency, shadow teams, and local data workarounds. People build coping mechanisms. Business stakeholders learn who to call off the record. Analysts keep private extracts. Central teams inherit intake processes that grow more bureaucratic every year.

What should be seen as structural weakness instead becomes part of the culture. We have seen time and again that legacy analytics operating models persist because people have adapted to their limitations.

AI puts that adaptation under stress. When the business wants faster experimentation, more embedded decision support, tighter governance, reusable data products, better model oversight, and more visible business value, the cost of friction rises quickly. A central team that was merely slow in a dashboarding world may become a material obstacle in an AI-enabled one. A distributed model that once looked entrepreneurial may start producing conflicting definitions, inconsistent controls, duplicated engineering, and a level of local variation the enterprise can no longer manage responsibly. The problem becomes a mismatch between how the enterprise works and what AI scale requires.

Operating model assessment belongs inside data strategy, not beside it as a separate organizational exercise. Leaders need an honest view of how the current environment performs not only in architecture and governance, but also in decision-making processes, culture, and business alignment. Does the current state enable better decisions? Is governance experienced as value-added or as a hindrance? Are business beneficiaries, demand-side users, and IT operators aligned on what the environment is supposed to do? The answers reveal whether the current model can support what the enterprise is now asking of data and AI.

In practice, this means leaders should pay close attention to the internal conversations that keep recurring: "We're building things locally, but no one else can see or build on them." "Every team defines this differently." "We asked the central group months ago and are still waiting." "Our best analysts are stuck maintaining one-off assets." When frustrations like these become common, they usually indicate that the current theory of operations is no longer working. The structure may still exist on paper, but it is no longer producing the behavior the enterprise needs.

Operating Model Consideration #2: “Have we designed for both local speed and enterprise coherence?”

Once leaders accept that the current model may be part of the problem, the next temptation is to swing to the opposite extreme. Enterprises frustrated with sprawl try to centralize. Enterprises frustrated with bottlenecks push more work into the business. Both moves are understandable. Neither is sufficient on its own.

This is one of the clearest lessons from our federated analytics work with clients. Centralized models can improve consistency, visibility, and control, but they often slow delivery and distance analytics from decision-makers. Distributed models improve responsiveness and local relevance, but they tend to fracture enterprise definitions, duplicate effort, and drive up support and governance costs. Many organizations move back and forth between these poles without solving the underlying problem. We call this the "oscillation trap," and it is a natural reaction to the pain each model creates.

A better answer is to design for both local execution and shared enterprise discipline. Large organizations have to work within a basic reality. No single central team can meet all analytical needs at the speed of the business, but local autonomy without shared infrastructure, governance, and methods will not scale. The operating model has to support both.

In practice, that means a central group that stewards shared platforms, enterprise data products, governance frameworks, and advanced capabilities, alongside embedded teams that stay close to business context and deliver decision support where it is needed. Autonomy is preserved, but not unbounded. Governance is enforced, but not suffocating.

The operating model must be more than an org chart. It needs explicit definitions of ownership, boundaries, and handoffs. What belongs centrally? What belongs locally? Which data products are enterprise assets, and which are domain-specific? Who defines standards? Who implements them in context? Who pilots advanced analytics locally, and who hardens reusable capabilities for broader use? Without that level of definition, the enterprise slips back into ambiguity, which is where duplication, dropped work, and political friction tend to grow.

This design challenge becomes more important under AI pressure because the volume and diversity of analytical work increase at the same time. Some use cases need deep business proximity. Others need enterprise controls. Some begin as local MVPs and deserve broader reuse later. Some should remain domain-specific. A healthy operating model has a way to absorb that variation without losing coherence. Visible playbooks, operating rules, decision forums, and roadmaps are extremely useful here. They make the model legible enough for people to work inside it with confidence.

Operating Model Consideration #3: “Are we treating culture as an operating requirement?”

In our assessment and advisory work, we see organizations underestimate the role of culture in the analytics operating model. Data and analytics leaders may accept the need for clearer roles, better governance, and more intentional structure, but they too often assume that if the org design is sound enough, the people side will follow. Usually, that is not the case.

Distributed analytics work does not stay aligned simply because a slide says it should. Teams need shared norms, shared language, visible escalation paths, and repeated opportunities to learn from one another. New analysts need to understand how the enterprise works at large, while embedded teams need meaningful ways to shape standards and roadmaps. Central teams, in turn, need to be experienced as partners that enable progress across the business. None of that happens on its own. It has to be designed, funded, and maintained.

One feature that distinguishes successful federated analytics operating models is that they treat community-building as part of the model itself. Onboarding, rotational programs, cross-team forums, internal summits, common methods, and shared playbooks help analytics professionals understand their work as part of a broader system rather than as isolated activity. Over time, that can support shared adoption, reduce redundant invention, strengthen trust across teams, and make it easier for local innovation to contribute to enterprise capability.

The same is true for handoffs and governance. In a scaled environment, embedded teams should be able to solve urgent problems quickly, but when those solutions prove useful beyond one team or function, the organization needs a defined way to promote, harden, document, and maintain them. Without that, isolated wins do not turn into broader capability, local teams end up maintaining production-grade assets, and central teams risk rebuilding work that already exists. Governance matters here as well. When standards are experienced only as restrictions, teams work around them. When responsibilities, expectations, and support are clearly defined, teams are more likely to operate confidently within a shared framework.

The broader point is that AI raises the standard for how enterprises have to operate. Speed without coordination, autonomy without shared discipline, and innovation without defined handoffs will not carry very far. Sustainable progress depends on an operating model built to support the demands AI is now placing on the enterprise.
