
A Wise Crowd: Good Questions and (Hopefully) Good Answers to our 2021 Predictions and Priorities Webinar

Thanks to all who joined IIA’s annual Predictions and Priorities webinar, led by Bill Franks with co-panelists Kathleen Maley, Eric Siegel, and Drew Smith. If you’d like to give it another look, or a first one, please go here to register for the recorded version.

As is always the case in our webinars, we had some great questions from some of the brilliant members of the IIA Community. Regrettably, this time we could not answer all of them live, so please see a summary of many of the questions, along with some responses, below. To be clear, these responses are mine; I open the floor and invite disagreement from my co-panelists and from you.

Prediction and Priority:

Data Literacy Takes Center Stage, Design Your Data Literacy Program For Success

Question: People often confuse a data literacy strategy with a data strategy. How should we think about its differences and similarities?

Answer: Generally speaking, we don’t hear this confusion often, even though both are very hot topics right now. One key difference is that a data strategy is typically more detailed, since its core purpose is to improve the availability, timeliness, and quality of data, in that order of priority. It also means that the audience for a data strategy likely skews toward the technical and analytical side. In contrast, a data literacy endeavor is far broader, and while it should be well designed, it is likely much less detailed. We see both as critical components for those looking to improve their analytics maturity, which requires data mastery and leadership, two of the “5 Areas of Differentiation of Analytics Leaders,” on which we have done a webinar and published an e-book.

To support further understanding of the two topics, check out these two resources – a Data Literacy Webinar with Valerie Logan of the Data Lodge, and IIA’s own Data Strategy E-Book.

Question: Could one shared language be to use Microsoft Excel as a "starter" for analytics? Please don't cringe on this question too much but I find so many financial professionals think they are not data literate when they just need an easy tool that they have experience with to get closer to understanding data management, visualization, and even some automation with macros.

Answer: No cringe whatsoever! As mentioned above, one of the leading authorities on data literacy and an IIA Expert, Valerie Logan of The Data Lodge, helps companies improve their data literacy with a very active approach. And while I am not sure how she feels about Excel, she and I have discussed the value of “meeting folks where they are,” so if Excel is comfortable for folks, then adding data literacy to that makes sense. Another one of the panelists, Eric, even uses Excel in his Coursera course, Machine Learning for Everyone, where a predictive model built in Excel helps anchor fairly sophisticated concepts.

Question: It's so much more than having 'business goals'! That's been around since CRISP-DM in the '90s. There's so much _socially_ that's missing. Attitudes from the top (which comes from executive literacy); addressing organizational culture; overcoming resistance; intentional collaboration across lanes; beyond deployment - addressing true adoption, etc, etc. If these soft issues aren't addressed, otherwise technically valid solutions will just die on the vine. Organizations will not be able to industrialize analytics until they conquer more strategic and cultural issues. Agree?

Answer: Yup! We have not only held the Data Literacy webinar mentioned above with the wider IIA Community, but also had several discussions with analytics executives in our Analytics Leadership Consortium, and the richness of the challenges mentioned above comes through in these discussions. If it were only as easy as “technically valid solutions,” it’s likely IIA would not exist. It’s often said, “it’s the soft stuff that’s hard.” And while I won’t speak for the presenter of this section, Kathleen Maley, I will share that in my discussions with her, and based on her impressive track record, she also understands well the complexity of the challenges and what it takes to overcome them. Check her out here on the Data Strategy Show with Samir Sharma for a sample of what I mean.

Question: What is the role of corporate learning vs university or other providers in data literacy?

Answer: We might now, in the world of data literacy programs, be suffering from the paradox of choice. Many large firms have immense capability in their Learning and Development functions, there are focused providers like The Data Lodge mentioned above, there are corporate accounts from MOOC providers, and there are extremely well-done programs at universities, like this one at the University of North Carolina at Charlotte. Large-scale data literacy programs are still relatively rare, so it’s hard to say that one option is better than another, but I do see one clear pattern: those who are successful do not go it alone. I encourage teams to roughly frame up their data literacy ambitions in terms of how wide and deep in the organization they want to go and how much they can manage internally, and then reach out to universities and external providers to find a fit.

Question: The real challenge is the gap in knowledge about the benefits of analytics at the C-level in organizations. How do you solve that?

Answer: The best data literacy programs are audience specific, meaning they recognize that there is no one-size-fits-all curriculum. What the C-suite needs to know about the benefits (and costs) of growth in analytics capability and maturity is certainly critical, and so is the impact of data quality and of acting on data-driven insights for front-line coworkers. In addition to checking out the webinar from Valerie Logan mentioned above, this blog from IIA co-worker Lise Massey shares some key concepts in this area.

Prediction and Priority

Model Interaction and Exploration Come to the Forefront, Integrate Design and User Interface Principles with Data Science

Question: If we have a data scientist and a software developer, is there another role about User Experience design? Model adoption can change dramatically based on the "user experience" that is designed around it. Who should lead this component of data science practices?

Answer: This question points to an area we debated including in this year’s Predictions and Priorities: Analytics as a Product. I have written a blog on this here and hosted a webinar as well. In both the blog and the webinar, we touch on some key notions, the first of which is the importance of focusing on the user experience. Just acknowledging that a user is out there is, for some teams, a big step. The second key notion is that, even if an analytics team does not have a dedicated UX professional, someone should take that role. It’s often an additional task, so it should be someone with the bandwidth and a close connection to the end user. That could be an analytics translator, analytics catalyst, or business analyst. Usually the time and the talent (and often the inclination) do not sit with the data scientist or the leads in the modelling team.

Prediction and Priority

Adversarial Attacks Strike a Blow, Take Action to Mitigate Adversarial Attacks

Question: Which are the adversaries in the adversarial attack?

Answer: Unfortunately, this is not like The Queen’s Gambit; your adversary will not be as obvious as Vasily Borgov or Beth Harmon. Fortunately, the answers to the next two questions will help you prepare for the unknown adversaries.


Question: Since deep learning models will come under attack, can data scientists build self-monitoring into the models that could self-identify some of these potential attacks (e.g., data out of bounds with the model expectations)? (Realize this won’t cover all potential attacks).

Answer: There are several methods to detect adversarial attacks and/or render models that are more robust against such attacks. The method you sketched here sounds a bit like anomaly detection, which is a general method for analytical cyber-attack detection (i.e., even when it isn't ML that's the target of attack). For an overview, see https://www.youtube.com/watch?v=Kz7AjZjVX-c
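To make the idea concrete, here is a minimal sketch of that kind of input monitoring, assuming a scikit-learn-style Python pipeline. The feature matrices are randomly generated stand-ins; a real deployment would fit the detector on its actual training features and tune the flagging threshold to its own tolerance for false alarms.

```python
# Minimal sketch: flag scoring inputs that fall outside the distribution the
# model was trained on, before they reach the production model.
# The data below is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
X_train = rng.normal(loc=0.0, scale=1.0, size=(5000, 10))    # data the model was trained on
X_incoming = rng.normal(loc=0.0, scale=1.0, size=(100, 10))  # new scoring requests
X_incoming[:5] += 8.0  # simulate a handful of out-of-bounds (possibly adversarial) inputs

# Fit an anomaly detector on the training distribution only.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(X_train)

# predict() returns -1 for inputs the detector considers anomalous.
flags = detector.predict(X_incoming)
suspicious = np.where(flags == -1)[0]
print(f"{len(suspicious)} of {len(X_incoming)} incoming records flagged for review")
```

Records flagged this way can be routed to a human or a fallback rule rather than scored blindly, which is one pragmatic way to build in the kind of self-monitoring the question describes.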


Question: Can this be mitigated best by a better, perhaps AI/DL enabled, testing regime before roll-out? E.g., 737 Max. I’ve heard this called “destroy your business” (DYB) strategizing in an earlier life.

Answer: There are several methods to detect adversarial attacks and/or render models that are more robust against such attacks. For an overview, see https://www.youtube.com/watch?v=Kz7AjZjVX-c
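For the pre-rollout, “destroy your business” style of testing the question describes, one common stress test from the research literature (not something prescribed in the webinar) is to generate adversarial examples, for instance with the fast gradient sign method (FGSM), and measure how far accuracy falls. Below is a minimal PyTorch sketch; the tiny untrained model and random data are placeholders purely to illustrate the pattern.

```python
# Minimal pre-rollout robustness check using the fast gradient sign method (FGSM).
# The stand-in model and random data exist only to make the sketch runnable.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))  # stand-in classifier
x = torch.randn(256, 20)               # stand-in evaluation batch
y = torch.randint(0, 2, (256,))        # stand-in labels

def fgsm_perturb(model, x, y, epsilon=0.1):
    """Nudge each input in the direction that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

x_adv = fgsm_perturb(model, x, y)
print(f"clean accuracy:       {accuracy(model, x, y):.2f}")
print(f"adversarial accuracy: {accuracy(model, x_adv, y):.2f}")
```

A large gap between the clean and adversarial numbers before launch is a signal to invest in defenses such as adversarial training or input filtering; the overview video above surveys the options.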

Prediction and Priority

Centralized Organizational Models Lose Their Luster, Consider If Your Organizational Model Needs A Tune Up

Question: On Organizational Models: neither fully centralized nor fully distributed is good. Should be a federated with more centralized early, then federated with more distributed as overall competency is established. Agree?

Answer: Yup! This is very close to the sub-thesis of the prediction. Leading companies, like Amazon, have established solid core capabilities in data management and (clearly) the accompanying technology. They also have long-established data-driven decision making, so analytics capability is seamlessly connected to the business decision makers. As more and more legacy companies develop widespread and deep capability in data-driven decision making, they should move away from a centralized approach. The concern we have observed is that some firms have very uneven data-driven decision making (ranging from mediocre to terrible) and still have very poor data management capabilities and a disconnected infrastructure, yet are moving away from centralization anyway. They may be frustrated at a lack of movement in analytics results (perceived or real) or have very little understanding of how data is managed, reflecting generally low knowledge of how data actually works to drive analytics. These firms should not move away from a centralized approach.

Question: A less central, more federated/democratized analytics organizational structure/community would foster better data literacy and broader data-driven culture. Thoughts?

Answer: We see that these things need to be nurtured regardless of the structure. And certainly, if centralized = hoarding of resources, knowledge, or data itself, then maybe a decentralized approach would work. That being said, if the central analytics organization is actively inhibiting a broader spread of knowledge, that in and of itself is a major problem. It’s also the case that sometimes a strong centralized organization can serve as an excuse for functions to underinvest in what is now a critical capability for all functions. The more decentralized the organization, the more effort should be made to create ties that bind; one way, for example, is to create a Community of Practice, which Lise Massey of IIA wrote about here.

Question: Who is the mediator when modeling determines a need for a business operational change, but that change may sit outside their organization which can drive finger-pointing or turf wars?

Answer: This situation arises both in central organizations, where a theoretically neutral analytics organization sees an opportunity in a functional business line, and in federated organizations, where, for example, marketing and product might see an opportunity in each other’s business lines, so I’m afraid I don’t have a magic bullet here. We do see that firms with greater maturity come to “ground truth” more quickly about the size and importance of the opportunity because they have an equally high understanding of how analytics works and trust in the data is high. Hopefully this creates even more incentive to improve data literacy and analytics maturity; after all, internal finger-pointing is a waste of precious energy.

Question: As a data scientist, what do you recommend: should we focus on working in a Center of Excellence or in a functional unit?

Answer: Assuming the functional unit exists in a company with good to excellent broader data capability (see the first question and answer in this section), it can be a really great experience for data scientists to work in functional areas. In these companies, the functions are themselves quite data smart, so your coworkers will be better able to convey what problems they need your help solving, which can reduce iterations and increase the value of your work. The key advantages of working in a central function are the increased interaction with other data scientists and the diversity of analytics products to which you are exposed.

Question: Data scientists are comfortable working with 'messy data'. They strongly resist working with 'messy humans' - and if not the data scientist, _someone_ needs to address cultural issues. :^} Agreed?

Answer: Nope, don’t agree! All humans (including data scientists) are messy, and data scientists who avoid the learning and development gained by working across lines are short-changing themselves and the organization by hoarding their knowledge. And their peers in the business and elsewhere who don’t work to understand (to some level) the new knowledge and talent that data scientists bring are doing the same. There is no “someone” in this sense, there is only “everyone.” It takes two to tango, and a lot more to ensure that analytics delivers value to enterprises. The result of keeping functions walled off is the development of data science solutions that have value only in the abstract, which is a theme of my new blog series, the first installment of which can be found here.

Question: The pain points raised in these three predictions so far stem from "short termism". Data driven digital transformation, to be successful, requires longer time horizons than most organizations (and business decision leaders) are allowing it, leading to more failures with the burden of that falling on the data and analytics leaders. How do we address this?

Answer: A first step to fixing the problem of short-termism is to admit we have a problem, so the question gets us off to a good start. As an industry and as a profession, we do indeed look too often for quick and easy answers. We did so when we made it seem as if standing up a large Hadoop cluster would solve our analytics challenges, or as if we could do so by hiring an army of data scientists or by investing in large-scale “self-service” solutions. The challenge of enterprise analytics is a large one, made up of many interconnected parts: from changing how people think (data first) to the language they share (data literacy), and from accurately assessing difficult-to-grasp concepts (artificial intelligence) to selecting the right fast-moving technologies. As analytics leaders, we need to start by not underselling the size of the change and to stop overselling the promise of the next big thing.

(Not Even Close To) Final Thoughts

Since there is a bit of inference about the questions built into the answers, please accept my apology if I misunderstood your question. If you disagree with the answers, that’s great! As Mark Twain said, “it’s the difference of opinion that makes horse races.” Feel free to reach out via email at dsmith@iianalytics.com or on LinkedIn at linkedin.com/in/andrewjsmithknownasdrew.

Thanks, and best wishes for a great 2021!


Bill Franks

Bill Franks is IIA’s Chief Analytics Officer, where he provides perspective on trends in the analytics and big data space and helps clients understand how IIA can support their efforts and improve analytics performance. His focus is on translating complex analytics into terms that business users can understand and working with organizations to implement their analytics effectively. His work has spanned many industries for companies ranging from Fortune 100 companies to small non-profits.

Franks is the author of the book Taming The Big Data Tidal Wave (John Wiley & Sons, Inc., April, 2012). In the book, he applies his two decades of experience working with clients on large-scale analytics initiatives to outline what it takes to succeed in today’s world of big data and analytics. Franks’ second book The Analytics Revolution (John Wiley & Sons, Inc., September, 2014) lays out how to move beyond using analytics to find important insights in data (both big and small) and into operationalizing those insights at scale to truly impact a business. He is an active speaker who has presented at dozens of events in recent years. His blog, Analytics Matters, addresses the transformation required to make analytics a core component of business decisions.

Franks earned a Bachelor’s degree in Applied Statistics from Virginia Tech and a Master’s degree in Applied Statistics from North Carolina State University. More information is available at www.bill-franks.com.

Kathleen Maley

Kathleen Maley specializes in making data and analytics work for an organization. After 12 years at Bank of America as an analytics executive, Kathleen was recruited to lead the enterprise Consumer and Digital Analytics organization at a regional bank in the Midwest. Core to her strategy and operating framework were elevating the role of analytics from data provider to strategic partner, engineering a shift toward collaboration, and establishing a discipline for tracking value. Kathleen is a published writer and frequent speaker. She holds a BA in Mathematics and an MA in Applied Statistics.

Eric Siegel

Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who makes machine learning understandable and captivating. He is the founder of the long-running Predictive Analytics World and Deep Learning World conference series, which have served more than 17,000 attendees since 2009, the instructor of the end-to-end, business-oriented Coursera specialization “Machine Learning for Everyone,” a popular speaker who’s been commissioned for more than 100 keynote addresses, and executive editor of The Machine Learning Times. He authored the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at more than 35 universities, and he won teaching awards when he was a professor at Columbia University, where he sang educational songs to his students. Eric also publishes op-eds on analytics and social justice.

Drew Smith

With close to 20 years of experience, Drew has worked on both the business side of analytics, leveraging insights for business performance, and on the delivery side of analytics driving the use of enterprise analytics. As the Executive Director of IIA’s Analytics Leadership Consortium he engages with analytics thought leaders and top analytics practitioners in the IIA Community to deliver impactful meetings and valuable content, helping to keep analytics leaders in the lead.

Before joining IIA, he led the data analytics and governance team at IKEA’s global headquarters in Europe. He made heavy use of analytics in various leadership roles across the IKEA value chain in both the United States and Europe. He credits his business success largely to his early and avid adoption of analytics to help drive his decisions and is passionate about helping other organizations do the same. He received his MBA from Penn State and his undergraduate degree from Boston University.