84.51° Builds a Machine Learning Machine for Kroger

Machine learning is a great way to extract maximum predictive or categorization value from a large volume of structured data. The idea (at least for “supervised learning,” by far the most common type in business) is to train a model on a set of labeled data and then use the resulting model to make predictions or classifications on data where we don’t know the outcome. The approach works well in concept, but developing and deploying the models can be labor-intensive.

One company, however, is rapidly developing a “machine learning machine” that can build and deploy very large numbers of models with relatively little human intervention. You may have heard of dunnhumby, a UK-based analytics company owned by the big retailer Tesco. dunnhumby had a US joint venture with Kroger named dunnhumbyUSA. In 2015 Kroger purchased dunnhumbyUSA and renamed it 84.51°. The name is the longitude of the company’s Cincinnati headquarters, and a nod to the precision of the analytics it employs. But 84.51° also analyzes data like a finely-tuned machine.

The current approach to machine learning at 84.51° emerged from an initiative called “Embedded Machine Learning” (EML). Scott Crawford leads the initiative, but it had multiple progenitors. 84.51°’s Chief Operating Officer, Milen Mahadevan, is a champion for automation of processes and products within the organization. 84.51°’s Shop, a custom-built BI platform that allows CPG customers to pull detailed reports about shopping behavior, is a successful example of BI automation. Shop evolved to replace and extend what had been ad hoc reporting. Embedding machine learning is the logical progression from ad hoc modeling and segmentation to automated processes that generate value through efficiency and improved accuracy.

Another champion for the EML mission was the project’s executive sponsor and 84.51° Chief Science Officer, Paul Helman. Helman is one of the creators of the company’s item-level sales forecasting product. Most machine learning models are static, created from a single training data set. Kroger’s sales forecasting instead updates its models nightly with the most recent data. As such, Helman and his team were early movers in embedding machine learning at Kroger. He recognized that “adaptive estimators” are important for efficiently modeling complex human behaviors such as shopping preferences.
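The “adaptive estimator” idea, in which a model is updated incrementally each night rather than retrained once on a static data set, can be sketched with scikit-learn’s `SGDRegressor` and its `partial_fit` method. This is an illustrative toy with made-up data, not 84.51°’s actual forecasting system:

```python
# Minimal sketch of an "adaptive estimator": rather than training once on a
# static data set, the model absorbs a fresh batch of sales data each night.
# All data here is simulated; the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
model = SGDRegressor(random_state=0)
true_coefs = np.array([2.0, -1.0, 0.5, 0.3])

# Simulate 7 nightly batches of (features, units_sold) for one item.
for night in range(7):
    X_batch = rng.normal(size=(100, 4))  # e.g. price, promo flag, weekday, season
    y_batch = X_batch @ true_coefs + rng.normal(scale=0.1, size=100)
    model.partial_fit(X_batch, y_batch)  # incremental update, no full retrain

# Score tomorrow's conditions with the continuously updated model.
tomorrow = rng.normal(size=(1, 4))
forecast = model.predict(tomorrow)
```

The design point is that `partial_fit` keeps the model current without paying the cost of refitting on the entire history every night.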

These successful prior efforts led a small group of analysts and one VP to propose a project to embrace open source implementations of advanced methods. That project was endorsed by the senior managers of 84.51° and eventually grew into EML—a formal mission to enable, empower, and engage the organization to better use and embed machine learning. “Enable” meant providing the infrastructure to efficiently use and embed machine learning, such as servers, software, and data connectivity. “Empower” involved identifying the best set of machine learning tools and training analysts and data scientists to use them. After evaluating more than 50 tools, 84.51° selected R, Python, and Julia as its preferred machine learning languages, and DataRobot as its primary automated machine learning software provider. “Engage” meant motivating internal clients to use the tools by demonstrating and socializing the benefits through several proofs of concept, sharing code and examples (via GitHub), and consulting.

Another part of the EML initiative was to develop a standard methodology for machine learning use. Its internally developed methodology, which it calls 8PML (84.51° Process for Machine Learning), is unusual for a non-vendor organization. Crawford says that it borrows heavily from publicly available data mining processes such as CRISP-DM and ASUM-DM, but was customized to better fit 84.51°’s specific use cases and environments.

Most machine learning effort in companies is focused on development of models, but 84.51° was interested in a broader focus. 8PML begins with the “Solution Engineering” phase, in which the analysis is framed, and the business objectives for the project are clarified and compared to available resources. For example, a project’s business objective might require a very large number of models to be routinely updated and quickly deployed, without the requisite budget and staffing. In the past, solution engineering would require rethinking the problem to stay within resource constraints. Automated machine learning technology can substantially lessen those resource constraints. Solution engineering is still necessary, but the horizon of solutions has broadened.

In the Model Development phase of the methodology, data is analyzed, variables or features are engineered, and the model that best fits the training data is identified. Automated machine learning speeds this phase of the process considerably, increasing the productivity of data scientists. That frees them up to fit more models and/or to give more effort to other high-value aspects of the process (e.g., solution engineering and feature engineering). The technology also makes it possible for less skilled practitioners to generate high-quality models. Detailed knowledge of which algorithms are appropriate for certain analyses is no longer essential; automated machine learning takes over that function.
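The core of what automated machine learning does in this phase—fitting several candidate algorithms and keeping the one that cross-validates best—can be sketched in a few lines of scikit-learn. This is an illustrative stand-in on synthetic data, not DataRobot’s actual implementation:

```python
# Hedged sketch of automated model selection: try several candidate
# algorithms and keep the one with the best cross-validated accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a labeled training set.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# 5-fold cross-validation scores each candidate; the analyst no longer
# needs to know in advance which algorithm suits the problem.
scores = {name: cross_val_score(est, X, y, cv=5).mean()
          for name, est in candidates.items()}
best_name = max(scores, key=scores.get)
best_model = candidates[best_name].fit(X, y)
```

Commercial automated ML tools extend this loop with feature preprocessing, hyperparameter search, and ensembling, but the selection principle is the same.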

The third and final component of the 84.51° approach to machine learning is Model Deployment, in which the chosen model is deployed in production systems and processes. Given the scale of machine learning applications at Kroger—the sales forecasting application, for example, creates forecasts for each item in each of more than 2500 stores for each of the subsequent 14 days—this stage of the process is key. And as Scott Crawford points out, issues around deployment (or “productionalization,” as he refers to it) are often underestimated:
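A back-of-envelope calculation shows why deployment dominates at this scale. The store count and 14-day horizon come from the article; the per-store item count below is a purely hypothetical placeholder:

```python
# Rough scale of the nightly forecasting job described above.
stores = 2500        # "more than 2500 stores" (from the article)
horizon_days = 14    # forecasts for each of the next 14 days (from the article)
items = 30000        # hypothetical assortment size per store

forecasts_per_night = items * stores * horizon_days
print(f"{forecasts_per_night:,}")  # 1,050,000,000
```

Even with a modest assumed assortment, the system must produce on the order of a billion item-level forecasts a night, which is why productionalization, not modeling, becomes the bottleneck.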

Prior to my current role facilitating the use of machine learning at 84.51°, my work experiences included building and deploying models at one of the nation’s largest insurance companies and one of the world’s largest banks. One commonality across all my experiences is that productionalization is often the most challenging phase of machine learning projects. The requirement of a production deployment often severely constrains the viable solutions. For example, productionalization might require code to be delivered in a specific language (e.g., C++, SQL, Java, etc.) and/or to meet strict latency thresholds.

Automated machine learning tools can help with the deployment process by generating code or APIs that embed the model. 84.51° often makes use, for example, of DataRobot’s ability to output Java code for data preprocessing and model scoring.

Many companies today are experimenting with machine learning, but 84.51° and Kroger have taken this AI approach to the next level. The “Embedded Machine Learning” initiative, standardizing on an automated machine learning tool, and the three-stage machine learning methodology have all helped to create a “machine learning machine.” Models are framed, developed and deployed in the same way that a well-managed manufacturing organization might create physical products. We’ll probably see multiple examples of this factory-like approach to machine learning in the future, but 84.51° is practicing it today.

Originally published on LinkedIn Pulse