Jump to:
- How do we turn our AI ambition into business outcomes that leadership will recognize and support?
- How do we deploy AI safely when our data and governance foundations are still maturing?
- How should we structure our teams and operating model to accelerate AI across the enterprise?
- How do we decide which AI capabilities to build ourselves and which to source from vendors?
- How do we govern AI systems that learn, change, and behave in ways we cannot fully predict?
- How do we encourage responsible AI use across the workforce when familiarity and trust vary widely?
- How do we connect all the AI tools our teams are adopting so they work together instead of working at odds?
Introduction: The Pressures and Constraints Shaping Enterprise AI
Every enterprise is feeling the pressure to “do something” with AI. Boards are asking for strategies. Executives are asking for pilots. Vendors are embedding AI into every product and insisting that transformation is one subscription away. Yet in our work across industries, IIA sees a very different picture inside the organizations that must deliver on these expectations.
Leaders are not short on ambition. They are short on clarity. They face a flood of new capabilities but still wrestle with the same foundational constraints: fragmented data, strained governance, unclear ownership, inconsistent operating models, and a widening gap between what vendors promise and what the enterprise can realistically support. The result is a landscape where experimentation is easy but scale is elusive.
IIA’s position is grounded in fifteen years of analytics maturity assessment and advisory work. We see what happens when AI collides with legacy systems, entrenched processes, and organizational complexity. We see the hidden failure modes long before they show up in the glossy case studies. Most importantly, we see where progress is happening and why.
This resource hub brings those insights together through recurring questions that surface across client engagements. They reflect the practical tensions leaders face at large legacy enterprises as they connect advanced AI capability with enterprise realities, and they highlight the decision patterns, constraints, and opportunities shared across industries. If you are navigating similar pressures, you are not alone; these are the same challenges your peers are wrestling with as they work to align ambition, readiness, and risk.
1. How do we turn our AI ambition into business outcomes that leadership will recognize and support?
Across IIA’s advisory and assessment work, leaders describe a growing gap between the urgency of AI ambition and the organization’s ability to anchor that ambition in business value. Boards push for visible progress, yet many early requests arrive as broad prompts such as “tell me what the data says” or “use AI to fix flat sales.” These signals point to a deeper issue. Conversations begin with solutions instead of problems, and teams are asked to deploy techniques before the desired outcome or future state is defined. This pattern produces long use-case lists, scattered pilots, and rising expectations while making value harder to achieve.
IIA’s experience shows that the organizations gaining momentum follow the same discipline that enables success with traditional analytics. Leaders start with the business decision, the process that must change, and the single metric that best reflects success. Only after defining these elements do they choose among rule-based (symbolic) automation, predictive modeling, generative capabilities, or agentic workflows. This approach keeps teams from applying the newest tool everywhere and grounds ROI in operational reality. Early AI gains often take the form of modest efficiency improvements because the largest costs lie in the process, cultural, and behavioral changes required for adoption.
Value accelerates when enterprises build vertically along a value chain instead of launching isolated experiments. Reusing data pipelines, feature sets, and workflows across adjacent problems increases ROI and shortens delivery cycles. The strongest outcomes come from leaders who frame AI as a transformation effort that requires workflow integration, user trust, and sustained alignment with stakeholders. The gap between ambition and value narrows when AI work returns to business language and follows a sequence of work that can be executed inside the enterprise.
2. How do we deploy AI safely when our data and governance foundations are still maturing?
Across IIA’s work with enterprise data and analytics leaders, one pattern appears more often than any other: organizations feel urgent pressure to advance AI, yet their data, governance, and operating foundations are far from ready for the level of automation they are attempting. Boards want fast progress, executives want visible wins, and suppliers insist that delay is risky. Meanwhile, the essentials that make AI safe and scalable—data quality, lineage, controls, human oversight, and accountable ownership—remain incomplete or unstable. This gap between ambition and readiness creates the conditions where AI deployment becomes fragile and unpredictable.
When organizations push ahead without addressing those gaps, the consequences follow a familiar arc. Models are deployed on top of inconsistent or poorly defined data. Copilots are put in front of employees with little guidance on what they should or should not do. Vendor tools proliferate faster than governance structures can adapt. Sensitive information gets uploaded to external systems because internal alternatives do not exist. These patterns lead to abandoned pilots, hallucinations, compliance exposure, shadow AI, and rising concern within legal, privacy, and security teams. The issue is rarely the model itself. The issue is the environment in which the model operates.
IIA’s view is that enterprises can still move forward with AI even when foundations are immature, but only if leaders adjust their approach. The question is not how quickly AI can be deployed, but how safely it can be deployed at the organization’s current level of readiness. Teams that succeed take a containment-first approach, selecting use cases that rely on reliable data, defining where AI is allowed to fail, and mapping the cost of those failures. They introduce governance that accounts for probabilistic output, hallucinations, drift, and explainability. They design guardrails for people as well as models, limit early deployments to controlled environments, and create clear policies for vendor models and data-sharing risks. Over time, they expand from low-risk copilots to workflow augmentation and eventually to selective autonomy in domains that can tolerate it. This creates a path of safe acceleration rather than precautionary paralysis and allows maturity to strengthen through use, evaluation, and iteration.
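To make the containment-first idea concrete, its boundaries can be written down as reviewable, versioned configuration rather than tribal knowledge. Below is a minimal Python sketch of that pattern; the use cases, autonomy levels, and policy fields are hypothetical illustrations, not a prescribed schema.

```python
# A minimal sketch of a containment-first policy, expressed as data so it can
# be reviewed and versioned. All use cases, tiers, and fields are hypothetical.

CONTAINMENT_POLICY = {
    "internal_copilot": {
        "autonomy": "assistant",        # a human reviews every output
        "allowed_failure": "low-stakes drafting errors",
        "data_sources": ["curated_knowledge_base"],
        "max_audience": "pilot_group",
    },
    "claims_triage": {
        "autonomy": "recommend_only",   # model suggests, a person decides
        "allowed_failure": "mis-ranked queue items",
        "data_sources": ["claims_warehouse"],
        "max_audience": "claims_team",
    },
}

def deployment_allowed(use_case: str, requested_autonomy: str) -> bool:
    """Reject deployments that exceed the autonomy the policy permits."""
    policy = CONTAINMENT_POLICY.get(use_case)
    if policy is None:
        return False  # unknown use cases are contained by default
    levels = ["assistant", "recommend_only", "autonomous"]
    return levels.index(requested_autonomy) <= levels.index(policy["autonomy"])

print(deployment_allowed("internal_copilot", "autonomous"))  # False
```

The point of the sketch is the default: anything not explicitly mapped to a failure tolerance stays contained until it is.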
3. How should we structure our teams and operating model to accelerate AI across the enterprise?
Across industries, IIA sees a consistent pattern: the operating model that supports analytics does not support enterprise AI. Analytics depends on accuracy, documentation, stability, and predictable workflows. AI requires something different. It moves faster, spans more functions, and depends on skills and decision paths that do not exist in most analytics organizations. When enterprises try to scale AI by extending their analytics operating model, they encounter the same problems every time: bottlenecks in centralized teams, political friction over ownership, shadow experimentation at the edges, and fragmented innovation scattered across tools and platforms.
Leaders who gain momentum recognize that AI is a shift in operating rhythm. They shorten the path from idea to deployment, reduce handoffs between analytics, engineering, and business teams, and clarify who owns risk, value, funding, and delivery. They also recognize that AI is not an analytics exercise. It touches legal, security, architecture, procurement, HR, product management, and frontline operations. One team cannot scale AI alone. Progress comes from a cross-functional operating fabric that balances shared platforms and governance with distributed problem definition and adoption.
IIA’s view is that there is no single correct structure. The right model is the one that makes value creation repeatable. High-performing organizations build a hybrid approach that centralizes standards, platforms, and security while distributing execution to the business. They create clear roles for product managers, ML engineers, domain translators, and risk partners. They shift funding away from one-off projects and toward shared platforms and reusable components. They design decision paths that move quickly and embed governance throughout the lifecycle. And they treat AI as a business transformation effort rather than a technical capability delivered by a central team.
4. How do we decide which AI capabilities to build ourselves and which to source from vendors?
Across IIA’s advisory work, leaders describe persistent uncertainty about where to build internal AI capability and where to rely on vendors. Many feel pressure to demonstrate progress and end up making premature decisions that lock them into platforms they cannot afford or systems they cannot maintain. Others swing in the opposite direction and over-rely on vendors for components that should be distinctive to their business. These patterns leave organizations with pilots that do not scale, infrastructure that grows faster than governance, and investments that fail to create advantage.
IIA’s view is that AI value depends on knowing what to own. Some capabilities only pay off when they reflect proprietary data, domain nuance, or process knowledge. Others create no strategic differentiation and are better purchased or assembled. And some capabilities, especially those tied to foundation models or high-performance infrastructure, require scale and research investment that enterprises cannot sustain on their own. Leaders who bring discipline to these decisions accelerate AI delivery without losing control of outcomes.
The most effective approach is a portfolio posture. Build where proprietary knowledge shapes performance, buy commodity capabilities that vendors already optimize at scale, and partner for components that demand advanced engineering or infrastructure. This lens shifts the conversation from “Can we build it?” to “Should we own it?” and prevents the common trap of pursuing custom builds that later collapse under maintenance, compliance, or cost. It also helps leaders anticipate long-term operational load, negotiate vendor relationships with intention, and architect flexible ecosystems that adapt as models and economics evolve.
5. How do we govern AI systems that learn, change, and behave in ways we cannot fully predict?
One of the most persistent challenges raised in IIA’s advisory work is that traditional governance methods do not fit the systems leaders are now expected to oversee. Predictive models were already difficult to monitor. Generative and agentic systems introduce new complexities: outputs that shift with each prompt, models that perform multi-step reasoning, and workflows in which machines call other machines. These dynamics make it impossible to rely on deterministic governance playbooks that assume stable behavior and predictable outputs.
Leaders consistently ask how to prevent hallucinated content from making its way into customer-facing or operational reports, how to detect vendor-driven model changes before they disrupt business workflows, and how to diagnose multi-agent behavior when each agent makes decisions based on a chain of previous actions. Many also struggle with basic visibility. They lack logs that show what an agent saw, how it interpreted a situation, and which tools it selected. They are also discovering that drift now occurs in three places at once: the underlying data, the models themselves, and the updates vendors push into the ecosystem without warning. At the same time, teams must release AI capabilities quickly to meet executive pressure, creating a governance gap that exposes the enterprise to avoidable errors and compliance risks.
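Of those three drift sources, data drift is the most straightforward to instrument. As one illustration, a team might compare a baseline sample against current inputs with a population stability index; the sketch below is a minimal version, and the ~0.2 alarm level noted in the comment is a common rule of thumb rather than a standard.

```python
import math
from collections import Counter

def population_stability_index(baseline, current, bins=10):
    """Compare two numeric samples; PSI above ~0.2 is a common drift alarm."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # guard against a zero-range sample

    def bucket_shares(sample):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in sample)
        n = len(sample)
        # small floor avoids log(0) when a bucket is empty on one side
        return [max(counts.get(b, 0) / n, 1e-6) for b in range(bins)]

    base, cur = bucket_shares(baseline), bucket_shares(current)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, cur))
```

A statistical check like this catches only the data-side drift; model changes and unannounced vendor updates still require the logging and contractual controls discussed below.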
The organizations that navigate this environment successfully treat governance as a design discipline. They validate systems continuously rather than relying on one-time certification, testing behavior across prompt sets, edge cases, and scenarios where errors have material consequences. They classify use cases by error tolerance so they can define when AI acts as an assistant under human supervision versus when it can operate with more autonomy. They build monitoring into daily workflows, placing controls at decision points rather than only at model checkpoints. They invest in fact-checking and verification skills across the workforce because generative systems produce fluent but unreliable content. They strengthen human oversight wherever consequences are high and adopt federated execution models so business units own monitoring within shared guardrails. They also negotiate vendor contracts that include transparency, lineage, and update notifications, recognizing that vendor changes can alter system behavior overnight. As autonomy grows, they reinforce fallback mechanisms and expand supervision so the enterprise can absorb unexpected outcomes without business disruption.
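One way to picture continuous validation is as a regression suite of prompts that re-runs on every model or vendor change, gated by the error-tolerance classes described above. The sketch below assumes a hypothetical `call_model` function and illustrative prompt cases; a real suite would be far larger and tied to domain-specific checks.

```python
# A minimal sketch of continuous validation: re-run a fixed prompt set on every
# model or vendor update and flag behavioral changes. `call_model` is a
# hypothetical stand-in for however your stack invokes the model.

from dataclasses import dataclass
from typing import Callable

@dataclass
class PromptCase:
    prompt: str
    must_contain: str          # a fact the answer is required to include
    error_tolerance: str       # "low" cases block release; "high" only warn

PROMPT_SET = [
    PromptCase("What is our refund window?", "30 days", "low"),
    PromptCase("Summarize the Q3 outage report.", "root cause", "high"),
]

def validate(call_model: Callable[[str], str]) -> bool:
    """Return False if any low-tolerance case fails, mirroring a release gate."""
    release_ok = True
    for case in PROMPT_SET:
        answer = call_model(case.prompt)
        if case.must_contain.lower() not in answer.lower():
            print(f"FAIL ({case.error_tolerance}): {case.prompt!r}")
            if case.error_tolerance == "low":
                release_ok = False
    return release_ok
```

Running this on a schedule, and again whenever a vendor announces (or is suspected of) an update, turns one-time certification into the continuous testing the paragraph describes.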
Roundtable Peer Insights: Change Management and AI
Data and analytics leaders shared insights on their approach to change management and AI. Download the key themes and takeaways below.
Roundtable Peer Insights: Building and Leading AI Teams
Data and analytics leaders shared insights on building and leading AI teams. Download the key themes and takeaways below.
6. How do we encourage responsible AI use across the workforce when familiarity and trust vary widely?
Every enterprise leader we work with faces the same dilemma. Business teams want AI embedded in their daily work, and vendors are pushing copilots and automated features into every SaaS application. Yet the organization’s foundational readiness in areas such as data quality, governance, security, and literacy often lags behind. Leaders are caught between encouraging adoption and preventing misuse, between democratizing capability and avoiding a free-for-all.
IIA’s advisory work shows that most organizations are not struggling to get started with AI. They are struggling to spread it. Technical teams can build pilots in controlled environments, but scaling requires managers, frontline workers, and subject-matter experts to interact with AI systems directly. This shift introduces risks that are difficult to monitor: leakage of sensitive information into external systems, propagation of hallucinated content, shadow AI tools that bypass governance, unvetted workflows that influence decisions, and a loss of visibility into where automated logic affects business outcomes.
The organizations that expand safely treat scaling as an organizational transformation. They make AI accessible to non-technical roles but within structures that protect the business and reinforce consistent habits. They invest in literacy so employees know when to trust or question system behavior. They embed guardrails directly into tools through redaction layers, input filters, approved model lists, and transparent usage logging. They reduce shadow AI by offering sanctioned, low-friction alternatives that meet business needs. They pair central standards with distributed execution so business units can scale adoption without losing control. They design role-specific interfaces rather than generic chatbots, which limits misuse and guides employees toward productive patterns. And when exploring more advanced capabilities such as agentic AI, they add monitoring, logging, and clear reversal paths so actions remain supervised and recoverable.
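As a rough illustration of embedding guardrails in tooling, the sketch below combines a redaction layer, an approved-model list, and usage logging in a single wrapper. The patterns and model names are hypothetical, and a production redaction layer would need far broader coverage than two regexes.

```python
# A minimal sketch of tool-embedded guardrails: redact obvious sensitive
# patterns, enforce an approved-model list, and log every call.

import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_usage")

APPROVED_MODELS = {"internal-gpt", "vendor-copilot-enterprise"}  # illustrative
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def guarded_prompt(model: str, prompt: str, user: str) -> str:
    """Sanitize and log a prompt before it reaches any model client."""
    if model not in APPROVED_MODELS:
        raise PermissionError(f"{model} is not on the approved model list")
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    log.info("user=%s model=%s prompt_chars=%d", user, model, len(prompt))
    return prompt  # hand the sanitized prompt to the model client
```

Because the wrapper sits in the tool rather than in a policy document, employees get the sanctioned, low-friction path by default instead of routing around governance.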
7. How do we connect all the AI tools our teams are adopting so they work together instead of working at odds?
As AI capabilities accelerate, enterprise architectures are being stretched by competing platforms, embedded vendor features, departmental pilots, and emerging agentic systems. Leaders tell us their AI ecosystem feels less like a strategy and more like a chain reaction, with new tools appearing faster than they can be evaluated or governed. The result is fragmentation across the stack: duplicated development, inconsistent standards, incompatible models, and a growing mix of SaaS applications that quietly introduce AI into daily work.
IIA’s work with clients shows that the central challenge is not selecting a single platform but creating an architecture that can absorb innovation without destabilizing the business. A unified ecosystem does not require standardizing on one cloud provider or foundation model. It requires the connective tissue that allows multiple components to operate safely and predictably. Data contracts, lineage tracking, shared access patterns, and common monitoring frameworks do more to enable interoperability than any model decision. When these elements are weak, even strong tools function as isolated islands.
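A data contract, for instance, can be as simple as a published schema that consumers validate against before an AI pipeline reads the data. The field names and types in this sketch are hypothetical:

```python
# A minimal sketch of a data contract check: a producer publishes an expected
# schema, and consumers validate records before AI pipelines consume them.

CUSTOMER_EVENTS_CONTRACT = {
    "customer_id": str,
    "event_type": str,
    "amount": float,
}

def violates_contract(record: dict) -> list[str]:
    """Return a list of violations; an empty list means the record conforms."""
    problems = []
    for field, expected in CUSTOMER_EVENTS_CONTRACT.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            problems.append(f"{field} should be {expected.__name__}")
    return problems

print(violates_contract({"customer_id": "c-42", "event_type": "refund"}))
# ['missing field: amount']
```

Even a check this small gives two otherwise-unrelated tools a shared, testable agreement about the data flowing between them.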
The organizations that keep pace with rapid change treat ecosystem design as an ongoing discipline. They assume that models, vendors, and capabilities will shift, so they design for flexibility rather than permanence. They build guardrails that allow rules engines, predictive models, LLMs, and agents to function together with clear accountability for each decision point. They implement contractual controls that restrict vendor data usage, adopt logging requirements that give full visibility into model behavior, and use modular, API-first architectures to avoid hard dependencies. Their goal is resilience: an ecosystem flexible enough to adopt new AI tools, structured enough to prevent chaos, and transparent enough for leaders to understand how AI influences operations.
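The modular, API-first point can be illustrated with a narrow interface that workflows depend on, so swapping vendors or models touches one adapter rather than every workflow. Class and method names below are illustrative, with the vendor calls stubbed out:

```python
# A minimal sketch of API-first modularity: application code depends on a
# narrow interface, not on any specific vendor SDK.

from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorAAdapter:
    def complete(self, prompt: str) -> str:
        # a real adapter would call vendor A's SDK here; stubbed for the sketch
        return f"[vendor-a] {prompt[:40]}"

class VendorBAdapter:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt[:40]}"

def summarize(model: TextModel, document: str) -> str:
    """Workflows see only the interface, never a specific vendor client."""
    return model.complete(f"Summarize: {document}")

print(summarize(VendorAAdapter(), "Q3 incident report ..."))
```

When a vendor changes its model or its economics, the blast radius is one adapter, which is the resilience the paragraph describes.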
Top Client Inquiries on AI in 2025
- How should data and analytics leaders rethink data governance to meet the demands of AI while strengthening their foundational practices?
- How are leading organizations interpreting and operationalizing the EU Data Act, and what practical approaches can help us comply amid unclear and evolving guidance?
- How are leading organizations applying AI across supply chain use cases, and what concrete examples show how these solutions deliver measurable business impact?
- How should enterprises think about applying generative AI to large-scale numerical data, and what use cases actually create value today?
- What AI operating structures—particularly domain-specific centers of excellence—are proving most effective for accelerating enterprise transformation?