At this year’s International Institute for Analytics (IIA) Symposium, we gathered in Pittsburgh—fittingly, the city of 446 bridges—to explore what it means to bridge the foundations of analytics with the frontiers of AI. Hosted once again at Highmark Health’s headquarters, a long-standing client of IIA’s, the event brought together data and analytics leaders from across industries to exchange insights, test ideas, and compare lessons from the edge of enterprise transformation.
As Jack Phillips, IIA’s CEO, noted in his opening remarks, “Our year has been defined by this uneven terrain ahead—where every organization is asking not only how to do AI, but how to remain relevant as analytics leaders.” From new frameworks for responsible AI governance to the practical realities of data modernization, every session returned to a central idea: AI’s success will depend on how well we extend, not abandon, the foundations of analytics that brought us here.
The Human Side of Enterprise Transformation
The day began with a fireside chat between Jack Phillips and Mike Bennett, chief strategy and transformation officer at Highmark Health. Bennett reframed enterprise transformation as a human challenge first—one that relies on analytics not simply to inform strategy, but to shape how organizations think and change. “My job,” he said, “is herding cats, herding information, and herding insights.”
Bennett described Highmark’s “moonshot” ambition: to guide every person toward the best health decisions through a complete understanding of their biology and context. What began as aspiration has become tangible through AI, yet the greatest barrier is not technology but organizational readiness. Executives often drown in data, he warned, without translation or narrative. Analytics leaders must serve as translators, curating the few insights that truly alter business outcomes.
He emphasized that data storytelling is now an essential leadership skill. “Don’t just show me 16 charts,” he said. “Tell me what they mean for my business.” For Bennett, analytics maturity today means evolving from service provider to strategic partner, where D&A organizations must challenge assumptions, surface blind spots, and embed analytics as the connective tissue of enterprise strategy.
Rebalancing the Hype: Bridging Analytics and AI
Next, Nan Li, IIA Expert and founder of Nanalytics AI, challenged leaders to build their AI strategies on the proven disciplines of analytics, emphasizing clear problem definition, ROI-based prioritization, and deep business understanding. Her presentation, “Bridging the Power of Traditional Analytics to the Promise of AI,” cautioned against what she called the “GenAI hammer” mindset, the tendency to treat every problem as an AI problem.
Li introduced a decision-tree framework to help leaders determine when to use rules-based automation, traditional analytics, machine learning, large language models, or agentic AI. The framework, she explained, begins not with the technology but with five business questions: What state are we trying to reach? How structured is the process? What level of precision is required? How much error is acceptable? And what data do we truly trust?
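To make the triage concrete, here is a minimal sketch of how a decision tree of this shape might be expressed in code. The attribute names, branch order, and recommendations are illustrative assumptions for exposition, not Li’s actual framework:

```python
from dataclasses import dataclass

@dataclass
class Problem:
    """Illustrative attributes for triaging a business problem (assumed names)."""
    is_deterministic: bool          # can explicit rules fully describe the process?
    has_labeled_history: bool       # is there trusted historical data with outcomes?
    needs_language: bool            # are the inputs/outputs free-form text?
    needs_multi_step_actions: bool  # must the system plan and act across systems?

def recommend_approach(p: Problem) -> str:
    """Walk from the most constrained technology to the least."""
    if p.is_deterministic:
        return "rules-based automation"
    if p.has_labeled_history and not p.needs_language:
        return "traditional analytics / machine learning"
    if p.needs_language and not p.needs_multi_step_actions:
        return "large language model"
    if p.needs_multi_step_actions:
        return "agentic AI"
    return "clarify the business question first"
```

The point of the pattern is that the branching starts from properties of the business problem, never from the technology; the last fallback branch mirrors Li’s insistence on problem definition before tooling.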
Her message resonated with many attendees. The path to AI maturity is not a leap but a layering: building from existing analytics capabilities and scaling selectively. She urged leaders to think in terms of compounding value and deepening use cases within a single value chain rather than scattering pilots across the enterprise. The session’s core takeaway ties directly to one of IIA’s key tenets: the bridge to AI is built from the foundations of analytics already in place.
Learning from Failure: A Framework for Sustainable AI
In “My AI Framework Failure Points and How I Fixed Them,” Adam McElhinney, IIA Expert and CEO of Uptake, distilled two decades of experience into a pragmatic roadmap for avoiding common AI pitfalls. His starting point was blunt honesty: “Most AI projects fail—and usually for reasons that have nothing to do with algorithms.”
McElhinney identified seven recurring failure points, from weak production plans and absent validation data to poor post-launch monitoring. His approach centers on aligning every project to measurable business KPIs and preserving 25–40% of team capacity for ongoing maintenance and validation. In his experience, model performance is irrelevant if the business impact isn’t measured in deflection rates, cost savings, or revenue.
He also warned against overreliance on closed foundation models, citing a case where a third-party model update derailed production performance overnight. To mitigate that risk, his team implemented a router pattern architecture, giving applications the flexibility to switch between commercial and self-hosted models for stability and control.
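A router pattern of the kind McElhinney describes can be sketched as a thin abstraction over interchangeable model backends, so that a misbehaving provider can be swapped without touching application code. All names here are illustrative assumptions, not Uptake’s actual implementation:

```python
from typing import Callable, Dict

# A backend is just a callable from prompt to completion; in practice each
# would wrap a commercial API client or a self-hosted inference server.
Backend = Callable[[str], str]

class ModelRouter:
    """Route requests to a preferred backend with an explicit fallback."""

    def __init__(self, backends: Dict[str, Backend], default: str, fallback: str):
        self.backends = backends
        self.default = default
        self.fallback = fallback

    def complete(self, prompt: str) -> str:
        try:
            return self.backends[self.default](prompt)
        except Exception:
            # A provider-side model update or outage triggers the fallback,
            # keeping production behavior under the team's own control.
            return self.backends[self.fallback](prompt)
```

The design choice worth noting is that applications depend only on the router’s interface, so switching from a closed foundation model to a self-hosted one is a configuration change rather than a code change.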
The centerpiece of his talk was a 10-step AI framework—an end-to-end process from business problem definition to post-deployment governance. McElhinney’s message was both cautionary and empowering, emphasizing that successful AI depends on pursuing fewer, higher-impact projects supported by disciplined feedback loops. “If your model works but no one acts on it,” he said, “you don’t have a technical problem—you have a business one.”
Scaling Safely: Boeing’s Enterprise GenAI Blueprint
The conversation on practical AI governance continued with Siva Nallusamy, executive director and head of data and AI platforms at Boeing, in a fireside chat with Jack Phillips. Nallusamy’s story was a case study in scaling GenAI responsibly within a complex, global enterprise. Boeing’s AI program now reaches 170,000 employees, including 58,000 active users and 4,000 developers.
From the outset, the Boeing D&A team approached GenAI with the same rigor applied to flight systems, prioritizing safety above all else. Rather than viewing regulation as constraint, Boeing transformed compliance into a design principle. The company codified four governance pillars—safety and quality, explainability and transparency, security and compliance, and human oversight—anchored by a zero-trust architecture and an internal “unified data access layer.”
Boeing also restructured governance to accelerate innovation. Its AI Technology Council oversees a risk framework modeled after the EU AI Act, classifying applications from “low” to “prohibited” risk. Lower-risk projects can move quickly, while high-risk ones undergo close co-creation with compliance teams.
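A tiering scheme of this shape can be captured in a few lines. The tier names follow the EU AI Act’s categories as described above; the review paths are an illustrative assumption, not Boeing’s actual process:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk classes modeled loosely on the EU AI Act's categories."""
    LOW = "low"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

def review_path(tier: RiskTier) -> str:
    """Lower-risk projects move fast; high-risk ones get close co-creation."""
    return {
        RiskTier.LOW: "self-service approval",
        RiskTier.LIMITED: "lightweight review",
        RiskTier.HIGH: "co-creation with compliance teams",
        RiskTier.PROHIBITED: "not permitted",
    }[tier]
```

The value of making the tiers explicit is that speed becomes a property of the classification rather than a negotiation: most projects see only the fast path, and scrutiny concentrates where the risk actually sits.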
Beyond governance, Boeing invested heavily in AI literacy and democratization, hosting an annual AI Summit that draws more than 5,000 participants. As Nallusamy noted, Boeing’s future focus is shifting from productivity tools to industrial-grade agentic AI in safety, quality, and manufacturing. His closing message echoed throughout the day: success comes not from chasing frontier models but from redesigning the foundations—data, governance, and culture—to sustain flight in an AI-powered world.
Making Responsible AI a Business Capability
In the afternoon, IIA Expert Dave Cherry moderated a panel on “Building AI Governance for Business Impact,” featuring IIA Expert Michael Barber (Highmark Health), Dr. Elizabeth Adams (Minnesota Responsible AI Institute), and Morgan Templar (First CDO Partners).
Panelists defined responsible AI as both ethical and operational, describing it as a practice that embeds transparency, fairness, and explainability into business workflows, not just policy documents. Governance, they argued, should accelerate innovation by providing clarity and consistency. As one example, Highmark’s Responsible AI office operates with a “culture of yes,” where use cases are approved after collaborative refinement rather than denied outright.
The group explored how to design governance that fits within daily operations, integrating with security, privacy, and compliance teams while empowering employees to use AI confidently. Effective governance, they agreed, balances top-down accountability with bottom-up literacy. Executives must set expectations, but every employee should understand how responsible AI applies to their role.
The panel also tackled pressing realities, including inconsistent global regulation, employee anxiety about job security, and the growing need for continuous risk monitoring as agentic AI becomes more autonomous. Their practical advice to peers was clear: pause before scaling, assess what governance already exists, and measure responsibility not through checklists but through cultural readiness and business impact.
The Messy Middle of AI Architecture
The final fireside chat of the day featured Tom Thomas, SVP of data strategy, analytics and AI at FordDirect, in conversation with Jason Larson from IIA. Thomas provided a detailed look at what he called the “messy middle” of AI—the complex intersection of architecture, data, and context that determines whether enterprise AI succeeds.
He began by grounding FordDirect’s AI journey in its unique data environment. As a joint venture between Ford Motor Company and 2,900 dealerships, FordDirect manages customer, inventory, and behavioral data spanning 25 years of automotive interactions. That foundation, he explained, supports a “customer and vehicle journey platform” that connects advertising impressions to service transactions.
Thomas outlined his framework for balancing centralized and decentralized AI innovation. Enterprise-level initiatives (centralized) deliver measurable ROI and product enhancements, while decentralized experimentation drives agility and “return on the person.” Both, he argued, are essential to enterprise adoption.
His deep dive into context engineering—the process of structuring prompts, metadata, and retrieval logic around LLMs—offered one of the most technically grounded sessions of the day. Using FordDirect’s Insight Engage product as an example, Thomas explained how his team built a voice-enabled dealer assistant that allows users to query live data instead of toggling across multiple dashboards. Early prototypes relied on LLM-generated SQL and achieved only 70% accuracy. By decomposing the workflow into a multi-step retrieval-augmented generation (RAG) pipeline, accuracy rose to 90%, turning a proof of concept into a business-ready product.
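The decomposition Thomas describes, from a single text-to-SQL call into a multi-step retrieval-augmented pipeline, might look roughly like this. Every function name and step boundary here is an illustrative assumption, not FordDirect’s implementation:

```python
def answer_dealer_question(question: str, llm, schema_index, run_sql) -> str:
    """Multi-step RAG in place of one-shot LLM-generated SQL.

    llm: callable(prompt) -> str
    schema_index: retrieval over table documentation (search method assumed)
    run_sql: executes a query against the dealer data platform
    """
    # Step 1: retrieve only the schema fragments relevant to the question,
    # instead of stuffing the entire schema into a single prompt.
    tables = schema_index.search(question, k=3)

    # Step 2: generate a candidate query grounded in the retrieved context.
    sql = llm(f"Schema:\n{tables}\n\nWrite SQL answering: {question}")

    # Step 3: validate and repair the query before execution, rather than
    # trusting a single generation pass.
    checked_sql = llm(f"Check this SQL against the schema and fix any errors:\n{sql}")

    # Step 4: execute and have the model narrate the result for the dealer.
    rows = run_sql(checked_sql)
    return llm(f"Summarize for a dealership user: {rows}")
```

Splitting the workflow this way means each stage can be measured and improved independently, which is the kind of change that plausibly moves end-to-end accuracy from a one-shot baseline toward production-grade reliability.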
Thomas’s closing reflection underscored a theme that had run throughout the day, highlighting the integration of people, process, and data as the core of AI maturity. “Our next milestone,” he said, “is putting a wrapper around everything we built so that the data can be actioned easily,” a succinct expression of how architecture becomes strategy.
Bridging Foundations and Frontiers
Across every session, the same message emerged, emphasizing that AI success depends not on abandoning traditional analytics but on extending its principles—clarity, governance, measurement, and human judgment—into a new era of intelligence.
The symposium’s theme, Bridging Foundations and Frontiers, captured this inflection point. As organizations modernize data platforms and adopt generative and agentic AI, the most advanced leaders are rediscovering that analytics is most powerful when it centers on problem definition, value creation, and culture.
From Highmark’s people-first transformation and Boeing’s safety-first governance to FordDirect’s context-engineering breakthroughs, the day showcased how analytics leaders are navigating this uneven terrain with rigor and imagination. At IIA, we remain committed to helping our clients cross that bridge, staying grounded in evidence, guided by experience, and connected through the collective intelligence of our community.