
Over the past several weeks, we’ve unpacked IIA’s Returned Business Value (RBV) framework. We developed this framework in response to a perennial theme in our advisory work with clients: the difficulty in measuring and communicating the value of analytics and AI initiatives when they don’t map to traditional finance models. We’ve explored how to assess strategic and commercial value. Now, let’s turn our attention to the third pillar of the RBV framework: risk.

When we think about analytics risk, we cannot treat it the same as IT risk. If I’m spending money on a new data cataloging technology, or putting more money against a cloud service, the risks I’m incurring are well known and can be reasonably mitigated. Analytics risk can include traditional IT risk, but it is dominated by a different question: “Can we actually figure this out?”
There are at least three classes of risk in any analytics effort. The first is human capital risk: skills deficits, including the skills required to define, design, and test models, methods, and algorithms. The way we deal with that in practice is a rigorous experiment-prototype-pilot-production cycle. We experiment before we prototype, we prototype before we pilot, and we pilot before we take anything into production.
In addition, we try not to incur first-order acquisition and implementation costs until we believe we have a successful pilot on our hands; by the time we commit serious resources, much of the risk has already been retired. Experimentation is cheap. Fail fast, fail cheaply.
The second class of risk is technology risk. Adding S3 capacity on AWS is a low-risk endeavor. However, when you go out to select anything new—a new technology component, a new vendor—there is a significant risk that the technology or vendor you select will not perform as expected. Don’t confuse selling with installing, as they say in the technology industry. The analytics and AI space has the added problem of a very dynamic, volatile supply-side marketplace, where vendors can and do pivot, change what they provide, or exit the market altogether.
Alternatively, the technology might work as we expect and the supplier might remain the supplier we expect, but the technology might not be able to be integrated, at reasonable cost or at all, into the rest of our infrastructure. We likely won’t know for sure until we’ve paid the upfront acquisition and implementation costs, moved into production, and discovered the higher-than-expected integration costs. Due diligence in acquisition can help. Good contracts can help. But the risks in this area remain.
The last risk class relates to all other implementation risks. Once you go through your human capital risks and your technology risks, what are all the other implementation risks that could cause the project to be delayed? What happens if the development model is flawed, or the enterprise’s internal organization changes?
The exercise of listing and quantifying risks is intended, ultimately, to yield a risk register in which each identified risk is rated on a scale from zero (0), meaning it will never occur, to one (1), meaning it will definitely accrue.
Of course, if you can characterize a risk, you can also develop a contingency plan for handling it when it accrues. That is an essential part of any RBV analysis.
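A risk register of this kind is easy to sketch as a small data structure. The framework itself prescribes no particular tooling; the class name, entries, likelihoods, and contingency text below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row in a risk register (names and values are illustrative)."""
    description: str
    likelihood: float  # 0.0 = will never occur, 1.0 = will definitely accrue
    contingency: str   # what we will do if the risk accrues

register = [
    RiskEntry("Key modeler leaves mid-project", 0.2,
              "Cross-train a second modeler during the prototype phase"),
    RiskEntry("Vendor component fails integration testing", 0.3,
              "Fall back to the in-house pipeline; budget extra weeks"),
]

# Every likelihood must sit on the 0-to-1 scale described above.
for entry in register:
    assert 0.0 <= entry.likelihood <= 1.0
```

The point of the structure is that the contingency plan travels with the risk, so no rated risk enters the register without an answer to “what do we do if this accrues?”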
Your project risk is an aggregate of the individual risks in the risk register: typically the simple mean, though you can weight the individual risks as you see fit.
Once you know that aggregate risk score, you can apply the standard formula to get a risk-adjusted project value: (Benefits - Costs) * (1 - Risk)
For example, if you expect to get $1 million in benefits from a project that costs $200,000 to reach production status, with an aggregate risk factor of 0.25 (a lower-risk project), subtract cost from value (yielding $800,000), and multiply that result by 0.75 (1.0 - 0.25) to yield a risk-adjusted net commercial value of $600,000. That risk-adjusted value can then be stack-ranked against other projects, with or without factoring in your strategic value ratings.
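The arithmetic in that worked example takes only a few lines to reproduce. The individual likelihoods below are illustrative stand-ins, chosen so that their mean matches the 0.25 aggregate risk factor in the example:

```python
from statistics import mean

benefits = 1_000_000  # expected benefits from the worked example
costs = 200_000       # cost to reach production status

# Individual likelihoods from the risk register (illustrative values);
# the aggregate risk factor is their simple mean.
risk_likelihoods = [0.25, 0.125, 0.375]
aggregate_risk = mean(risk_likelihoods)  # 0.25

# Risk-adjusted project value: (Benefits - Costs) * (1 - Risk)
risk_adjusted_value = (benefits - costs) * (1 - aggregate_risk)
print(f"${risk_adjusted_value:,.0f}")  # $600,000
```

Computing the same figure for each candidate project gives you a common scale for the stack-ranking described above.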
This value is just shorthand: the bumper sticker version of the RBV analysis. The real value in any RBV framework is in having the risks registered and having developed contingency plans so that you know what you are going to do when those risks occur, as opposed to simply being caught flat-footed late in the game when a risk that you hadn’t even spent much time thinking about actually accrues.
People who deploy any new methodology are going to make mistakes. That’s part of learning. Mistakes are good. Make them often, make them quickly, don’t make the same one twice. But there are failure modes in these kinds of exercises to be aware of and avoid.
The first failure mode is: mistaking spreadsheets for socialization. “We need to do an ROI on this project.” “Okay, how do we do ROI?” “Here’s our spreadsheet. Just fill it in. You might need to talk to people in order to do that.” These spreadsheet exercises are almost certain to require you to assign value, assign cost, and possibly assign risk, although most ROI calculators don’t include a risk factor. If you haven’t talked to the authorities on both the value and the cost side—if you haven’t socialized the project, the hypothesis around value, and the hypotheses around cost—then no matter how well you fill out those spreadsheets, you have failed to socialize the project with the people who are affected by it or invested in it, and your project is at significant risk of failure as a result. The goal of a good RBV exercise is socialization and commitment. Spreadsheets are just artifacts of that process.
The second failure mode is: clean costs, no owned benefits. You have carefully characterized your costs, but nobody on the demand side of the organization will either quantify or own the value side of the equation. Without that level of ownership, projects don’t get funded. When the vice president of sales says, “I know this is going to transform our direct sales operation and improve our percentage of sales reps who attain quota from 55 percent to 68 percent,” that soft organizational power can move a lot of hard, practical mountains.
The third big failure mode is a classic one: overcommit and underdeliver. People understate the cost and overstate the value. Then the project runs over cost and underdelivers, which is common. That effect immediately marks the project sponsor and the project delivery team as bad bet-makers. “Everybody knows that Doug talks a good game when he’s trying to get a project funded, but projects are never as interesting in production as he says they will be when he’s looking for money, and they always take twice as long and cost twice as much.” Very quickly, those people get calibrated, and that’s just not a good place for Doug to be.
The fourth failure mode is: the no-risk project. There is no such thing as a no-risk project. All projects have risks. All the statistics we have for analytics and AI projects and technologies say that a very high percentage of them don’t finish at all, or do not deliver the value projected, or cost more than budgeted. Or some combination of those. Any ROI model that doesn’t say, “Here are the project risks, here is the likelihood that this risk is going to accrue, and here is our mitigation plan if it does accrue,” is putting the project sponsor and the project delivery team in jeopardy by not characterizing and advertising risks upfront. In our experience, senior leaders are actually understanding and supportive when forecasted risks accrue—and not when unforecasted risks arise. If you show up at the executive table a month before the project is due and say, “You know what, we’ve got a problem. Supplier A’s product does not work the way the data sheet said it was going to work, and it’s going to take us another six months and another $250,000 to bring that home,” the first question asked will certainly be: “How come we didn’t foresee this?”
The fifth failure mode is: objective conflict. In this failure mode, multiple projects target the same outcome—for example, a 30‑day reduction in sales cycle time—using different approaches (e.g., technology enablement versus retraining). These efforts often collide, making it hard to attribute positive results and easy to assign blame to the “other” team when expected outcomes don’t materialize. All advanced analytics and AI projects should ensure that their work is the only work targeting a specific objective or metric, and that the influences on those metrics are well understood and well characterized. Reducing cost-of-goods-sold at a time when input prices are being forced up by government tariffs, for example, makes for difficult measurement, and more difficult explanations.
The last failure mode worth pointing out, especially where you are both the project sponsor and the analytics leader, is: thoughtless compliance. Someone brings you an ROI model and says, “Just fill this out for me. It’s part of our standard governance process. I need you to complete this.” Wherever that ROI model came from, it’s probably not particularly applicable to the unique complexities of advanced analytics and AI projects. Instead of rushing ahead, first understand why your enterprise wants the ROI, and whether the model can be adapted to better fit advanced analytics projects. Often, ROI requests are less about measuring impact and more about creating barriers to getting projects approved.
