
The Questions Data and AI Leaders Are Asking Today

In our assessment and advisory work, we are seeing a clear shift in the types of AI questions enterprise leaders bring forward. Early conversations focused on experimentation and tooling. The questions now center on operations, governance, and the growing complexity of agentic systems. 

Most of the organizations we work with are large, complex enterprises that were not born digital. Their data environments span decades of systems, platforms, and organizational structures. Many are still evaluating their true AI readiness or working through the difficult transition from promising pilots to enterprise-scale production. In that environment, AI success depends less on finding the next breakthrough model and more on building the structural foundations that allow AI to operate reliably across the business. 

Some leaders are taking a deliberate approach. Instead of chasing the newest wave of AI capabilities, they are strengthening their data foundations, governance practices, and operating models before committing significant resources. Others are confronting the reality that early AI experiments exposed deeper structural gaps in architecture, ownership, and workflow design. These leaders recognize that scaling AI requires operational discipline, not just new technology. 

Across these conversations, three themes continue to surface. Leaders want to understand how governance should evolve in an AI environment. They want operating models that move AI beyond scattered experimentation. And increasingly, they want guidance on managing agentic AI across a growing multi-vendor landscape. 

Here are three areas where we are seeing the strongest demand for guidance today. 

AI Readiness Assessment

The IIA AI Readiness Assessment (AIRA) is a competency-based assessment of organizational readiness for adopting and deploying deep learning and generative AI applications. Designed to augment IIA’s Analytics Maturity Assessment (AMA), the AIRA focuses on the organizational, technical, and governance conditions required to deploy AI safely, effectively, and at scale.

Rethinking Data Governance for the AI Era

Across many enterprises, data governance leaders are asking a similar question: how should governance evolve now that AI is becoming embedded in analytics and operational workflows? For years, governance programs focused on policies, stewardship roles, and catalog adoption. Yet many organizations still struggle with low engagement from engineering teams and repeated attempts to restart governance initiatives that never gain traction.

The pattern we see most often is that governance challenges rarely originate in policy gaps. They begin in fragmented systems and disconnected workflows. Engineers often work across dozens of repositories, pipelines, and tools where information is difficult to locate, interpret, or trust. When governance focuses on documentation and ownership models instead of improving how people find and use data, adoption breaks down. The organizations making progress start with operational friction — fixing discoverability, access paths, and system integration — and treat governance as the outcome of better workflows rather than the starting point. 

At the same time, many leaders are evaluating whether AI itself can accelerate governance. Current experience suggests a narrower role. AI can assist with extraction, summarization, and lightweight classification tasks. Once governance work requires contextual reasoning or domain understanding, reliability drops and human review becomes necessary. Most organizations therefore use AI as a productivity aid while focusing their governance efforts on system alignment and workflow design. 
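The division of labor described above — AI handles lightweight tasks, humans review anything requiring judgment — can be sketched as a simple routing rule. This is a minimal illustration, not any particular product's workflow; the `Suggestion` type, the self-reported confidence score, and the threshold value are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """A hypothetical AI-generated metadata suggestion for a data asset."""
    asset: str
    proposed_tag: str
    confidence: float  # model's self-reported confidence, 0..1 (assumed available)

def route(suggestions, threshold=0.9):
    """Auto-apply high-confidence suggestions; queue the rest for steward review.

    Mirrors the pattern in the text: AI as a productivity aid for extraction
    and classification, with human review wherever reliability drops.
    """
    auto, review = [], []
    for s in suggestions:
        (auto if s.confidence >= threshold else review).append(s)
    return auto, review
```

In practice the threshold would be tuned per task type, and even "auto-applied" tags would typically remain auditable and reversible.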

Key insights: 

  • AI can assist with limited governance tasks such as document summarization or metadata extraction, but human oversight remains necessary for accuracy and compliance.
  • Governance adoption improves when organizations solve real workflow problems like data discoverability and usability rather than starting with stewardship frameworks.
  • Fragmented systems and unclear ownership paths often block governance progress more than missing tools or policies. 

Designing an Operating Model That Accelerates Enterprise AI

Across many enterprises, AI leaders are asking a similar question: what operating model allows AI to scale beyond scattered experimentation? Many organizations have already introduced copilots, low-code tools, and generative AI capabilities. Early adoption spreads quickly, but the activity often fragments across teams, leaving organizations with dozens of isolated experiments that never compound into enterprise impact.

The organizations making real progress approach AI as an operating model redesign rather than a tooling rollout. AI requires clear ownership, defined workflows, and shared architectural patterns that guide how work moves from experimentation to production. A centralized center of excellence often provides the scaffolding — standards, reusable components, and governance guardrails — while embedded domain pods translate those patterns into workflows inside business functions. This hybrid model allows teams close to the work to redesign processes while maintaining consistency across the enterprise. Check out IIA's resource hub on federated analytics for a deeper dive into why we believe this operating model is the ideal end state for mature D&A programs.

Another shift appears once organizations begin redesigning how work happens. Injecting AI into isolated tasks produces incremental gains, but the largest improvements come from rethinking entire roles and workflows. Many enterprises discover that routine tasks buried inside job descriptions consume far more time than expected. When teams map these activities and redesign workflows around AI-enabled support, productivity gains compound quickly. At the same time, organizations must guide citizen development through templates, guardrails, and reusable components to prevent fragmentation and support long-term scalability. 

Key insights: 

  • AI adoption accelerates when organizations establish a clear operating model that defines roles, workflows, and ownership for AI initiatives.
  • Hybrid structures — combining a centralized center of excellence with embedded domain pods — scale AI transformation faster than purely centralized or decentralized approaches.
  • The largest productivity gains emerge when organizations redesign entire roles and workflows rather than inserting AI into isolated tasks. 

Governing Agentic AI in a Multi-Vendor Enterprise

Another trend gaining momentum among enterprise data and AI leaders centers on agentic AI. Many organizations already operate predictive models and generative AI workloads at scale, but a new challenge is quickly appearing. Major enterprise software vendors are now embedding agent builders directly into their platforms. Salesforce, ServiceNow, Workday, and other systems allow teams to create agents inside the tools where they already work.

This creates a new platform design question. Centralized AI environments provide consistency, governance, and integration with enterprise data. Yet it is increasingly unrealistic to force every agent into a single platform. The practical approach emerging across enterprises is a hybrid model. High-risk or cross-functional agents often run in centralized environments where governance and monitoring are stronger. Narrow agents tied to a specific application workflow can live inside vendor ecosystems. The key is defining clear criteria that determine where an agent should be built before development begins. 
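The placement criteria described above can be expressed as an explicit decision rule, so the "where should this agent live?" question is answered before development begins rather than debated per project. This is a hedged sketch: the specific inputs (risk tier, system count, PII handling) and the rule itself are illustrative assumptions, not a prescribed policy.

```python
def placement(risk_tier: str, systems_touched: int, handles_pii: bool) -> str:
    """Illustrative placement rule for a new agent.

    High-risk or cross-functional agents go to the centralized platform,
    where governance and monitoring are stronger; narrow, single-application
    agents may live in the vendor ecosystem where the work already happens.
    """
    if risk_tier == "high" or handles_pii or systems_touched > 1:
        return "central-platform"
    return "vendor-native"
```

The value is less in the rule's sophistication than in its existence: teams can check placement at intake, and exceptions become explicit decisions rather than accidents.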

Agentic systems also introduce operational challenges that traditional model governance does not address. Agents can take sequences of actions, call tools, and interact across systems, which makes monitoring more complex than evaluating model outputs. Enterprises are beginning to build unified monitoring layers that capture each step an agent takes, the data it touches, and the decisions it makes. Governance must also occur earlier in the development cycle. Because agents execute actions and interact with multiple systems, organizations must define permissions, guardrails, and ownership before development begins rather than reviewing risk at deployment time. 
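The step-level monitoring described above amounts to recording, for every action an agent takes, what it did, which tool it called, and what data it touched. A minimal sketch of such an audit record follows; the field names and the in-memory log are assumptions for illustration — a production version would write to durable, centralized storage across platforms.

```python
from dataclasses import dataclass, asdict

@dataclass
class AgentStep:
    """One recorded action in an agent's execution trace (illustrative schema)."""
    agent_id: str
    action: str            # what the agent decided to do
    tool: str              # which tool or system it invoked
    data_touched: list     # identifiers of data the step read or modified
    timestamp: float

class StepLog:
    """In-memory step log; a real deployment would persist this centrally."""
    def __init__(self):
        self.steps = []

    def record(self, step: AgentStep):
        self.steps.append(step)

    def export(self):
        """Serialize the trace for review or cross-platform aggregation."""
        return [asdict(s) for s in self.steps]
```

Capturing traces in a shared schema like this is what makes unified visibility possible when agents run across both centralized and vendor-native environments.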

Key insights: 

  • Most enterprises will adopt hybrid agent platforms that combine centralized environments with vendor-native agent capabilities.
  • Agentic systems require step-level monitoring and unified operational visibility across platforms.
  • Governance must define permissions, ownership, and action boundaries before agent development begins.

Making AI Work in the Enterprise

Our all-in-one guide to making AI work inside enterprise analytics features IIA expert frameworks, real client inquiries, and practical guidance to help your team deploy AI confidently and deliver measurable value.