
Mapping Your Analytics Operating Model, Part 2

In Part 1 of this series, we explored the benefits of a better operating model and the components of IIA’s framework for mapping it. For large, complex organizations, IIA advocates structuring analytics strategically to enhance outcomes and mapping analytics operations to tackle inconsistencies in analytics product quality and delivery.

As we discussed, the benefits of a refined operating model include stronger cross-departmental collaboration, increased relevance and value of analytics outputs, better model selection and application, and more successful transitions from conceptual models to deployed solutions. The mapping framework itself is divided into five components—Opportunity, Data, Models, Deploy/Fix/Kill, and Manage—each playing a crucial role in streamlining the analytics process and ensuring that analytics operations contribute meaningfully to a company's efficiency and effectiveness. We’ve seen clients who engage in this process uncover actionable insights, drive continuous improvement, and strengthen strategic alignment with business goals.

Now, we’ll discuss a scorecard that can be used to evaluate your operating model and an example of this evaluation in action.

IIA RAN clients have access to the full analytics operating model framework and supplemental resources here.


The Framework Scorecard: Map Your Analytics Operating Model Through Use Cases

To assess your operating model, you will do a post-mortem on several analytics products (or POCs, or projects), even if those products are still live.

  • Gather at least five analytics projects/products of moderate scale and complexity: at least one or two successes, at least one failure, and at least one in the middle where the outcome is unclear or disputed.
  • Try to cover different functional areas and ensure that the data used spans a wide range as well, ideally covering both structured and unstructured data.
  • Finally, try to select a majority of use cases that fit your future technical landscape, since you’re aiming to examine the past for the purpose of being better in the future. For example, if your analytics are more likely to be in the cloud than on premises, prioritize the cases that are cloud-based. The same would apply if you’re moving toward more open-source tools over licensed providers.

Using the framework below, answer the questions for each of the analytics products. The goal is to get a rounded view of each analytics product across subjective metrics. Subjective metrics are suitable because, as mentioned earlier, the discussion will reveal areas of agreement and disagreement, which is valuable in itself. Additionally, after you have scored all the use cases together, a larger group discussion comparing results across use cases will reveal critical patterns that you can address in roadmaps and future strategies.

In the process of refining an analytics operating model, two types of questions are pivotal. The first, open-ended, facilitates dialogue among analytics teams, technology partners, and business stakeholders, aiming to establish consensus on the operating model and identify points of contention. The second, structured and scorable, allows for quantitative comparison both within and across use cases. This dual approach enables organizations to pinpoint and prioritize the specific improvements that will most enhance the operating model's overall effectiveness and efficiency.

IIA RAN clients have access to the full analytics operating model framework, including the complete list of evaluative questions, more use cases, and data visualizations here.

Opportunities: The sourcing of the right opportunities sets the stage for an efficient and effective model.

  • Who was involved in leading the discussion and decision to pursue this opportunity? (open)
  • How was the decision to move this opportunity to an analytical test model made and communicated? (open)
  • The right persons were involved and leading the discussion and decision to pursue this opportunity. (no / mostly no / mostly yes / definite yes)
  • It was clear to all stakeholders how the decision to move this opportunity to an analytical test model was made. (no / mostly no / mostly yes / definite yes)

Data: Data access and trust in data are the biggest challenges most firms face; the key to fixing them is focusing on where the failures cause real problems.

  • What data was used in the analytics product, and from where was it sourced? (open)
  • How easy was it to access this data? What barriers to access did the team face? (open)
  • It was easy to access the data needed throughout the development of the analytics product. (no / mostly no / mostly yes / definite yes)
  • The team could assess the quality of the data enough to judge its trustworthiness. (no / mostly no / mostly yes / definite yes)

Models: Matching the right model to the analytics product seems straightforward to some, but it’s not uncommon for some teams to “experiment” wildly while other teams never leave the “tried and true.” In reality, the best path is somewhere in the middle.

  • What model was used to address this opportunity and why was it chosen? (open)
  • What other models were considered and why were they not chosen? (open)
  • We used an existing or very similar model to ones used in the past. (no / mostly no / mostly yes / definite yes)
  • We evaluated other models, including custom models. (no / mostly no / mostly yes / definite yes)

Deploy/Fix/Kill: It’s well known that many analytics projects never make it to production, so discovering the meta-pattern behind why is essential; the effort here will make a massive difference in your analytics effectiveness.

  • Who was involved in the decision to deploy, fix, or kill the analytics product? (open)
  • Was the time and effort needed to hand over from the development team judged to be reasonable? (open)
  • The process and method to decide whether to deploy, fix, or kill was well understood by all involved. (no / mostly no / mostly yes / definite yes)
  • There was good coordination between the teams that developed and the teams that deployed. (no / mostly no / mostly yes / definite yes)

Manage: Even if the team or function that manages the models day-to-day is not the one that built them, it’s the same company, so the cost and resource impact of managing models is only justified when effective models that deliver business impact remain in deployment.

  • What is the general maturity of your process for managing models? (open)
  • How are decisions made that weigh the cost of maintaining the model against its business benefit? (open)
  • The responsibilities for developing, deploying, and managing models are clearly understood by all who need to understand them. (no / mostly no / mostly yes / definite yes)
  • There is an agreement and method to judge a model’s cost against its business benefit. (no / mostly no / mostly yes / definite yes)
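A shared spreadsheet is usually enough to tabulate the scored statements above, but the short Python sketch below shows one possible encoding, assuming the four-point scale maps to the numbers 1 through 4. That mapping, and the example responses, are illustrative assumptions and not part of IIA's framework.

```python
# Minimal sketch: tally scored-statement responses per framework component for one use case.
# The 1-4 numeric mapping and the example responses are assumptions for illustration only.

SCALE = {"no": 1, "mostly no": 2, "mostly yes": 3, "definite yes": 4}

COMPONENTS = ["Opportunity", "Data", "Models", "Deploy/Fix/Kill", "Manage"]

# Hypothetical responses for one use case: component -> answers to its scored statements.
example_use_case = {
    "Opportunity":     ["mostly yes", "mostly yes"],
    "Data":            ["mostly yes", "mostly no"],
    "Models":          ["definite yes", "mostly yes"],
    "Deploy/Fix/Kill": ["mostly no", "mostly no"],
    "Manage":          ["mostly yes", "mostly yes"],
}

def component_scores(responses: dict[str, list[str]]) -> dict[str, float]:
    """Average the scored statements within each framework component."""
    return {
        component: sum(SCALE[answer] for answer in responses[component]) / len(responses[component])
        for component in COMPONENTS
    }

for component, score in component_scores(example_use_case).items():
    print(f"{component:<16} {score:.1f} / 4")
```

The specific tool matters less than the consistency: averaging within components keeps the comparison across use cases on the same footing.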

Analytic Product Scoring: Use Case Example

Scoring and visualizing multiple analytics products allows you to step back and see broad themes. Once you see these themes more clearly, you can address them specifically and set the principles and culture for a more effective and efficient operating model going forward. Here’s an example of this framework in action with a cash lane queue predictor use case. The client-only resource includes data visualizations of the scoring.
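Before turning to that use case, here is a rough sketch of the kind of comparison such a visualization enables: a heatmap of framework components across use cases, using the 1-to-4 encoding assumed earlier. The use-case names and scores below are invented for demonstration only.

```python
# Rough sketch: heatmap of framework-component scores across several use cases.
# All use cases and scores are invented for illustration; they are not IIA's or any client's data.
import matplotlib.pyplot as plt
import numpy as np

components = ["Opportunity", "Data", "Models", "Deploy/Fix/Kill", "Manage"]
use_cases = ["Queue predictor", "Churn model", "Stock-out alert", "Pricing POC", "Promo uplift"]

# Rows: use cases; columns: components; values on the assumed 1-4 scale.
scores = np.array([
    [3.5, 2.5, 3.5, 2.0, 3.0],
    [2.0, 3.0, 4.0, 1.5, 2.0],
    [3.0, 2.0, 3.0, 3.5, 3.5],
    [1.5, 2.5, 3.5, 1.0, 1.5],
    [3.0, 3.5, 2.5, 2.5, 3.0],
])

fig, ax = plt.subplots(figsize=(8, 4))
im = ax.imshow(scores, cmap="RdYlGn", vmin=1, vmax=4)
ax.set_xticks(range(len(components)), labels=components, rotation=30, ha="right")
ax.set_yticks(range(len(use_cases)), labels=use_cases)
fig.colorbar(im, label="Average score (1 = no, 4 = definite yes)")
ax.set_title("Operating model scorecard across use cases (illustrative)")
fig.tight_layout()
plt.show()
```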

Analytics Product: Cash Lane Queue Predictor

Description: A predictive engine that alerts store management teams that queues will exceed the set threshold; the prediction must come early enough for managers to react and activate new cash lanes or initiate other actions to mitigate queues. The product was deemed a success because it reduced wait times by 50% in stores that deployed it, which exceeded any previous attempt to reduce wait times, including self-service cash lanes.

Opportunity

“Waiting times” are one of the top three drivers of customer dissatisfaction, so reducing wait times is a long-term strategic objective. The global customer service manager, together with a customer service manager of a high-volume store, was involved throughout, and analysts and data scientists from a global COE led the data collection and modelling. The biggest miss was the lack of early involvement from the mobile applications team and the IT infrastructure team. This caused frustration and delays when it came to deploying the model in production.

Data

It was easy to access historical data on a global and store level, as well as store sensor data for the test store. It was later discovered that sensor data for some stores is less reliable. There were ambitions to integrate weather and local event data (e.g., concerts, sporting events). Later the team found that road construction and transit data was essential in some locations. Because of the open APIs for weather and transit data, these were easy to integrate into the model later and did improve prediction accuracy. Local event data was mostly unstructured, and the team was not able to build this into the model.

Model

A fairly standard predictive forecasting model, adapted from one found on GitHub, was used with a threshold set by the business team. A gradient boosting model was considered but quickly judged too complex for the problem.
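The write-up doesn't specify the model's internals, so purely to illustrate the forecast-then-alert pattern described above, here is a minimal sketch. The naive trend forecast, the lead time, and the queue threshold are all assumptions for illustration, not the client's actual choices.

```python
# Illustrative "forecast, then alert on a threshold" sketch; not the client's actual model.
# The naive trend forecast, lead time, and threshold below are assumptions.

QUEUE_THRESHOLD = 8    # assumed business-set maximum acceptable queue length
LEAD_TIME_STEPS = 3    # assumed alert horizon, e.g. three 5-minute sensor intervals

def forecast_queue(history: list[float], steps_ahead: int) -> float:
    """Naive forecast: extend the recent trend in observed queue lengths."""
    recent = history[-6:]                        # last ~30 minutes of readings (needs >= 2)
    trend = (recent[-1] - recent[0]) / (len(recent) - 1)
    return recent[-1] + trend * steps_ahead

def should_alert(history: list[float]) -> bool:
    """Alert store management if the forecast queue will exceed the threshold."""
    return forecast_queue(history, LEAD_TIME_STEPS) > QUEUE_THRESHOLD

sensor_readings = [2, 3, 3, 4, 5, 6, 7]          # customers counted in the queue per interval
if should_alert(sensor_readings):
    print("Push notification: queue expected to exceed threshold - open more cash lanes")
```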

Deploy/Fix/Kill

There were a few critical areas to fix before deployment. First, the late engagement of the mobile applications team pushed the team to pursue a simpler application than expected. While some team members initially wanted a mobile dashboard with live tracking, the mobile applications team could only support a simple push notification. In the end, most team members, especially the store management team, found this acceptable since all they really needed was an accurate prediction with enough time to act. As mentioned, the late addition of transit data caused a small delay.

Manage

Since the product output was simple (push notifications) and the data mostly structured or available through open APIs, the model was fairly easy to manage. What the team could not anticipate was how the model would respond when store sensors were moved during a remodel. The global customer service manager, together with the customer analytics team lead, was responsible for tracking performance and finding a remodeling store to test the impact of the moved sensors.
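As a sketch of what that kind of performance tracking might look like, assuming predicted and actual queue lengths can be compared after the fact, a rolling error check can flag stores (for example, recently remodeled ones) whose prediction error drifts beyond an agreed tolerance. The error metric, window, and tolerance below are invented for illustration.

```python
# Sketch of a simple performance-drift check for a deployed prediction model.
# The rolling-error metric, window, and tolerance are assumptions, not the client's method.
from collections import deque

class DriftMonitor:
    """Rolling mean absolute error over the most recent predictions."""

    def __init__(self, window: int = 50, tolerance: float = 2.0):
        self.errors = deque(maxlen=window)   # keep only the last `window` absolute errors
        self.tolerance = tolerance           # acceptable average error in queue-length units

    def record(self, predicted: float, actual: float) -> None:
        self.errors.append(abs(predicted - actual))

    def drifting(self) -> bool:
        """True once the window is full and the average error exceeds the tolerance."""
        return (len(self.errors) == self.errors.maxlen
                and sum(self.errors) / len(self.errors) > self.tolerance)

# Example: a store whose sensors were moved during a remodel starts under-predicting queues.
monitor = DriftMonitor(window=5, tolerance=2.0)
for predicted, actual in [(4, 7), (5, 8), (6, 9), (5, 9), (4, 8)]:
    monitor.record(predicted, actual)
print("Review sensor placement" if monitor.drifting() else "Model within tolerance")
```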


Final Thoughts

The reason to be this deliberate in mapping your operating model is to improve that model: to make it more efficient and more effective. The mapping reveals patterns, like the failure to include decision makers early on or an over-fascination with building custom models. And it can even be used to kill myths, like “our data quality is the reason we can’t deliver analytics.”

In companies where the analytics team finds itself constantly battling an entrenched IT organization that keeps data locked down, this approach can make clearer which business opportunities are being held back. Or, if the organization has one or a few engaged business users whose models are developed and deployed more quickly, it can show how essential that opportunity component is. And when data is known to exist but cannot be located or verified, this approach supports a shift from data governance focused on control to governance aimed at delivering business value.

Usually, these problems are known to a few in the organization, but the discussion of them often becomes more opinion than perspective, with each case viewed as somehow unique. By actively discussing several opportunities in the same way and visualizing that discussion, the patterns tell the story and the perspectives can change.