Making MDM Matter: Breakthrough Conversation with IIA Expert Brigitte Workman

In this blog series, Jason Larson, head of content at IIA, interviews IIA Experts who help clients navigate top data and analytics challenges within their enterprises. With over 150 active practitioners and unbiased industry experts, IIA’s expert community provides tailored support, plan validation, and ongoing guidance to drive key analytics outcomes.

In this Breakthrough Conversation, Jason speaks with Brigitte Workman, IIA Expert and founder and principal consultant of ProviderMDM/DG, one of the most sought-after master data management (MDM) advisors in U.S. healthcare. She warns against boiling the ocean when calculating ROI from your data catalog or MDM investments, shares the most effective ways to measure progress toward a shared language with data, and gives us a peek into how she sees the ROI of data catalogs and MDM evolving as AI finds its footing in enterprise analytics. Visit IIA’s resource hub on why analytics and AI value gets lost and how to fix it for more tools and expert insights.

Research and Advisory Network (RAN)

Join RAN and book a consultation with Brigitte Workman and other leading data and analytics practitioners for master data management guidance.

Where do most organizations go wrong when trying to calculate ROI from their data catalog or MDM investment?

I see a few common scenarios. Often, I’m brought into organizations to help with discovery—evaluating products and deciding where certain capabilities should live—or to help clean up after an initiative failed to deliver on expectations. In either case, leaders usually want hard numbers they can present when requesting funds. They’re asking for capital to launch or redo a program, so they need to demonstrate that the return on investment will exceed the cost.

One approach I see a lot is the classic “boiling the ocean.” Organizations try to measure ROI by conducting detailed time studies. They’ll track how long it takes individuals to perform every task: compiling, matching, cleansing, calculating, and validating data. Then they multiply that across the organization, using compensation rates to estimate a total cost.
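
A minimal sketch of that time-study arithmetic, assuming a hypothetical task list, headcount, and loaded hourly rate (none of these figures come from the interview):

```python
# Hypothetical "boil the ocean" time-study math: per-task minutes scaled
# by headcount and a loaded compensation rate. All figures are made up.

HOURLY_RATE = 55.0      # assumed fully loaded cost per analyst hour
ANALYSTS = 120          # assumed number of people doing this work
WEEKS_PER_YEAR = 48

# Assumed minutes per person, per week, spent on manual data tasks
task_minutes_per_week = {
    "compiling": 90,
    "matching": 60,
    "cleansing": 75,
    "calculating": 45,
    "validating": 50,
}

hours_per_person_year = sum(task_minutes_per_week.values()) / 60 * WEEKS_PER_YEAR
annual_cost = hours_per_person_year * ANALYSTS * HOURLY_RATE
print(f"Estimated annual cost of manual data work: ${annual_cost:,.0f}")
```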

On the surface, this sounds reasonable, but in practice it rarely produces meaningful results. After implementation, very few organizations revisit those estimates to see whether the changes reduced time and effort. Without validating the impact, those initial calculations don’t give you a reliable measure of ROI.

And oftentimes, by taking the boil-the-ocean approach, organizations pour so much time and so many resources into quantifying these numbers that they burn through effort that could have gone toward simply implementing and building out those capabilities. Boiling the ocean is the first big trap: huge, laborious time studies and massive slide decks to justify a capability before anyone even starts working on it.

The other pattern I’m seeing more and more, especially in the last few years, is shortcutting the process with the internet and AI tools. People prompt AI to surface metrics that might quantify ROI on data governance, data catalog initiatives, or master data management programs, then plug those generic, pre-defined numbers into their business case. The problem is, unless you understand where those numbers come from, they can be misleading. Vendors often publish claims about potential ROI, but those figures aren’t necessarily grounded in your industry’s realities.

If you want credible numbers, you need to validate your sources. Because my background is predominantly in healthcare, I’d look to organizations like the AMA, JAMA (an AMA publication), CMS, or other groups that publish reliable, industry-specific benchmarks. Pulling metrics from these trusted sources ensures your ROI calculations are defensible.

Beyond that, I encourage organizations to focus on their own key use cases. What problems are you trying to solve? What are the recurring complaints you hear from data consumers? Maybe people can’t find the data they need, don’t know who owns it, or don’t trust the quality. Grounding your ROI model in those pain points—paired with industry-backed benchmarks—produces a much stronger, more realistic business case than simply reusing numbers you found online.

That’s where you have the opportunity to drill into the specific metrics that actually matter. Start by focusing on the use cases you’re trying to solve. What would be the impact if we made changes in those areas? How would that move the needle on maturing our data management capabilities?

Whether you’re introducing a data catalog so people can shop for data in one place or implementing master data management to establish a single source of truth, the key is tying your calculations to those priority use cases. From there, you can build capabilities incrementally, putting trusted data into users’ hands, scaling adoption, and maturing those capabilities over time.

Taking this measured approach also makes it much easier to justify requests for capital investments with leadership. When you start small and stay focused, you avoid the trap of trying to boil the ocean and instead demonstrate clear, tangible value early on.

What’s the most effective way to measure progress toward creating a shared language across the organization?

One approach I’ve used with clients is an onboarding study. Start by looking at a segment of employees who have been with the organization for about a year. At that point, they’ve had enough time to understand the organization’s inner workings, experience the pain points, and navigate the process of finding the right data and connecting with the right people.

You can assess how long it takes for them to become productive when it comes to working with data. Do they know where to find trusted sources? Do they know who owns the data? Do they understand the definitions behind key business terms? Comparing these insights before and after implementing a data catalog or MDM capability can give you a concrete way to measure the cultural ROI of “speaking the same language.”

So, survey employees who’ve been with the organization for about a year and ask them how much time they spend each week trying to locate data, identify owners, or figure out which source is trustworthy. Then, estimate how much time their newer teammates likely spend doing the same tasks.

From there, you can calculate the potential “time to value” savings if employees instead had access to a single source of truth—whether through a data catalog with intuitive search and familiar business keywords or by initiating a workflow with a data steward or owner. When you factor in organizational size, turnover rates, and the number of teams involved, even this single statistic can be eye-opening.
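
As a rough illustration, here is what that back-of-the-envelope model can look like. Every input is a hypothetical survey answer or HR figure, not a benchmark:

```python
# Sketch of the "time to value" savings estimate described above.
# All inputs are assumptions standing in for real survey and HR data.

hours_searching_per_week = 3.5   # assumed survey average: locating data, owners, trusted sources
expected_reduction = 0.60        # assumed share of that time a catalog/MDM capability removes
employees_touching_data = 800    # assumed headcount that works with data
annual_turnover_rate = 0.15      # assumed; newer hires spend even more time searching
new_hire_multiplier = 2.0        # assumed: first-year employees spend ~2x the time
hourly_rate = 50.0               # assumed fully loaded rate
weeks_per_year = 48

tenured = employees_touching_data * (1 - annual_turnover_rate)
new_hires = employees_touching_data * annual_turnover_rate

hours_saved = (
    tenured * hours_searching_per_week
    + new_hires * hours_searching_per_week * new_hire_multiplier
) * expected_reduction * weeks_per_year

print(f"Estimated hours saved per year: {hours_saved:,.0f}")
print(f"Estimated value: ${hours_saved * hourly_rate:,.0f}")
```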

For large organizations or those with decentralized analytics functions where there isn’t one place to shop for data, the savings are often significant. It’s a straightforward metric leaders can quickly understand and relate to.

Another key measure is how well people across different roles can use a common language when searching for data. Take healthcare, for example. A clinician might search for “diabetes management” or “diabetes education” using clinical terms. Someone else working with patients in a non-clinical role, though, might simply search for “sugars” because that’s the common term in their region. A good data catalog bridges those differences, returning the same dashboards, reports, and contacts regardless of terminology.
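
A toy sketch of that terminology bridging, with an invented synonym map and asset list standing in for a real catalog’s business glossary:

```python
# Different search terms resolve to the same canonical concept and
# therefore return the same assets. Glossary and assets are invented.

SYNONYMS = {
    "diabetes management": "diabetes",
    "diabetes education": "diabetes",
    "sugars": "diabetes",            # common regional/lay term
}

ASSETS_BY_CONCEPT = {
    "diabetes": [
        "Diabetes Care Dashboard (certified)",
        "A1c Outcomes Report",
        "Data steward: endocrinology domain",
    ],
}

def search(term: str) -> list[str]:
    normalized = term.lower().strip()
    concept = SYNONYMS.get(normalized, normalized)
    return ASSETS_BY_CONCEPT.get(concept, [])

# A clinician and a non-clinical user land on the same results:
assert search("Diabetes Management") == search("sugars")
```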

This also applies to master data management. When certified data assets—like dashboards and reports—are published to the catalog, users can quickly find and trust what they need. That “certified” label signals quality and reliability, encouraging broader adoption.

Modern catalogs also give you usage insights. You can track who’s engaging with the catalog, which keywords are most common, and where usage is lagging. Those patterns help you improve search results and identify where extra support might be needed. For example, if a department’s engagement is low, you might offer short data-literacy sessions or quick tutorials to help them get started. Over time, this raises usage and builds confidence across the organization.
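
The analysis behind that can be simple. Here is a hypothetical sketch that aggregates catalog search events by department to surface low-engagement areas; the event schema is invented, since each catalog exposes usage data through its own reporting:

```python
# Aggregate catalog search events to find top keywords and departments
# whose engagement lags. Sample events and threshold are assumptions.

from collections import Counter

events = [  # (department, search_term): toy sample data
    ("Cardiology", "readmissions"),
    ("Cardiology", "ejection fraction"),
    ("Finance", "cost per case"),
    ("Oncology", "sugars"),
]

searches_by_dept = Counter(dept for dept, _ in events)
top_keywords = Counter(term for _, term in events)

# Departments below an assumed engagement threshold get literacy outreach
THRESHOLD = 2
low_engagement = [d for d, n in searches_by_dept.items() if n < THRESHOLD]

print("Most common keywords:", top_keywords.most_common(3))
print("Candidates for data-literacy sessions:", low_engagement)
```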

Another useful approach ties back to the survey. One of the questions we include focuses on tribal knowledge: how much of it employees felt they had to build up before they could reliably find the data they needed. The goal is to understand how dependent the organization still is on informal know-how versus structured systems.

Ideally, you want to move toward a level of maturity where tribal knowledge is no longer required. People should be able to act independently, in a true self-service capacity, finding what they need when they need it and in the way that works best for them.

You’ve alluded to this in the conversation, but how should leaders frame the ROI of MDM and data catalogs to executives who see them as infrastructure or overhead and not value drivers?

Leaders often know they need these capabilities, but they struggle to justify them to executives who see them as overhead. I like to explain it using the illustration of pipes and plumbing. Master data management and catalogs are not ends in themselves. They are infrastructure that enables everything else.

One of the clearest examples right now is AI. Organizations are adopting AI to produce results faster, automate workflows, and shorten turnaround times. For any of that to work, the underlying data must be accurate and trustworthy. If it isn’t, the AI will generate results that are not just slightly off but completely wrong, and those errors will spread quickly across the organization.

And so to stay competitive, organizations need the underlying capabilities that make AI successful. MDM provides the foundation of trusted, consistent data that flows into AI-enabled systems. Companies that have already invested in MDM are gaining a competitive edge, while those without it are starting to fall behind.

Industries are moving at unprecedented speeds, and AI is increasingly embedded into nearly every platform, application, and interface. Without solid data governance, catalogs, and master data in place, organizations risk being unable to capitalize on AI-driven efficiencies and insights.

What signals should a company look for to know they’re ready to invest in evolving their catalog and data infrastructure, whether they’re just starting out or maturing their capabilities?

If we’re talking specifically about catalogs, there are a couple of clear signals. One is when an organization wants to offer federated analytics. You’ve got different departments managing their own data sources, but people still need to work across silos to get insights. That’s when you need a single place where everyone can “shop” for data. In these cases, each federated group should contribute to a centralized catalog or data library.

This becomes even more critical in organizations growing through mergers and acquisitions. When you’re consolidating—or even deciding not to consolidate—data from different sources, you need to catalog it so teams can quickly find what they need. Without a catalog, pulling reports across multiple organizations becomes slow and inconsistent. A well-maintained catalog makes it possible to continue executing your M&A strategy without slowing down day-to-day operations or delaying the next deal.

What’s a good example of a data catalog that really makes “shopping for data” work?

It depends a lot on the size of the organization and what they’re trying to solve for. Some companies go with very lightweight catalogs. Those might just have a few basic capabilities, like a business glossary to help people understand definitions and context, a list of data owners and stewards, and maybe the ability to scan and flag sensitive or proprietary data so you can control access. For smaller needs, those basics are often enough, and the price point makes sense.

But if you’re trying to bring a lot of people together into one place to find and use data, you need more. The stronger catalogs make it easier to search using familiar language. I might type in “diabetes” while someone else searches for “sugars,” and we’ll both end up looking at the same set of relevant assets. Those assets could include dashboards, data dictionaries, business glossaries, visual diagrams showing data flows—even a central library of APIs.

The better catalogs also make it easy to see which dashboards are certified and trusted, request access to data right from the interface, and track those requests automatically without jumping to another system. And when there’s a data quality issue, the catalog flags it, shows you if it’s already being worked on, and prevents duplicate tickets from piling up.
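
The duplicate-prevention piece boils down to a simple check. A toy sketch of the idea, with an invented ticket structure:

```python
# Before filing a data quality issue, check whether one is already open
# for the same asset. Ticket fields are invented for illustration.

open_tickets = [
    {"asset": "Diabetes Care Dashboard", "issue": "stale refresh", "status": "in progress"},
]

def report_issue(asset: str, issue: str) -> dict:
    for ticket in open_tickets:
        if ticket["asset"] == asset and ticket["status"] != "closed":
            return ticket  # surface the in-flight ticket instead of duplicating it
    new_ticket = {"asset": asset, "issue": issue, "status": "open"}
    open_tickets.append(new_ticket)
    return new_ticket

# A second report against the same asset returns the existing ticket:
print(report_issue("Diabetes Care Dashboard", "numbers look wrong"))
```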

That’s when it starts to feel seamless: one place where people can go, find what they need, trust what they’re looking at, and move on with their work.

Beyond ROI: A Practical Guide to Communicating Analytics Value

Take a practical approach to proving ROI on analytics—read our expert-designed eBook for a step-by-step guide on communicating the value of your projects.

You’ve talked quite a bit about AI already, but with AI accelerating and democratizing data insights, where do you see the relationship between ROI and data catalogs evolving over the next two to three years?

We’re living in an incredible time. Every kind of application and capability you touch right now is accelerating at a staggering pace. It’s like the whole industry has caught a wave of creativity.

For those of us in data management, that creativity is fueling new ideas about how AI can be integrated into the products we manage. And it’s not just practitioners—vendors are moving just as fast. Whether you’re talking about data governance catalogs or master data management platforms, there isn’t a single vendor out there that doesn’t already have AI capabilities in their general releases or on their near-term roadmap.

That acceleration brings a huge opportunity, but also a need for discipline. While I’m enthusiastic about what AI can do, I’m also cautious. Organizations need to think carefully about how they introduce AI capabilities, especially because regulatory and legal considerations vary depending on where you operate. If you’re adopting AI-driven features through your vendors, you need to understand how those vendors are managing risk, data protection, and compliance—because whatever they do ultimately flows into your environment.

Bottom line, if you have a data governance program, AI governance needs to become part of it. Make sure you have clear responsibilities, accountabilities, and processes for vetting AI capabilities before they enter your ecosystem. That’s the only way to move fast without inadvertently introducing risk.