Recently, when Assurant Solutions compared decisions made by highly trained experts with decisions made by ‘silicon-based intelligence’ (computerized judgments based on analytics), they were able to double, and in some cases triple, relevant performance metrics. That’s the story told in a recent Sloan Management Review article, “Matchmaking with Math: How Analytics Beats Intuition to Win Customers,” in which Cameron Hurst is interviewed by Michael S. Hopkins and Leslie Brokaw (SMR’s December 15, 2010 issue).
Assurant Solutions is in the financial services industry, and like many of their peers, they run a robust, state-of-the-art call center. A key function of the call center is to retain customers who call in to cancel their payment protection insurance. Even the best industry-standard retention rates of 15%-16% mean that 5 out of 6 customers are not retained, and Assurant Solutions sought a way to improve on that. Since they had a lot of data, they decided to try something completely different: they brought in mathematicians, instead of call-center experts, to find relationships in the data. What they found were relationships that in some cases defied conventional logic, and that would never have been found without the volume of data in their database.
Conventional wisdom dictated that improvement in operational experience, i.e., reducing wait time, was directly related to customer satisfaction, which was reflected in retention rates. So state-of-the-art centers strove to continually cut down wait time. But analysis of the data suggested that there might be other ways to increase retention, such as matching customer service reps (CSRs) with customers based on rapport and affinity. When the call-routing algorithm was changed to reflect this correlation, retention rates increased.
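The article doesn’t describe Assurant’s actual model, but the routing idea can be sketched in a few lines: score each available rep against the caller on shared attributes and historical save rates, then send the call to the best match. All field names and weights below are hypothetical, invented for the sketch; a real system would learn the weights from historical retention outcomes.

```python
# Toy sketch of affinity-based call routing (hypothetical fields/weights;
# not Assurant's actual model, which the article does not disclose).

def affinity_score(customer, rep):
    """Score a customer/rep pairing on shared attributes.

    Each shared attribute adds weight; in practice the weights would be
    fitted to historical retention outcomes rather than hand-picked.
    """
    score = 0.0
    if customer["region"] == rep["region"]:
        score += 1.0
    if customer["age_band"] == rep["age_band"]:
        score += 0.5
    score += rep["retention_rate"]  # rep's historical save rate
    return score

def route_call(customer, available_reps):
    """Send the call to the available rep with the best affinity score."""
    return max(available_reps, key=lambda rep: affinity_score(customer, rep))

reps = [
    {"name": "A", "region": "south", "age_band": "30-45", "retention_rate": 0.22},
    {"name": "B", "region": "west",  "age_band": "45-60", "retention_rate": 0.31},
]
caller = {"region": "south", "age_band": "30-45"}
best = route_call(caller, reps)  # rep A: shared region and age band outweigh B's save rate
```

Note that rep B has the better standalone save rate; the point of the affinity approach is that the pairing, not the rep in isolation, drives the outcome.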
The Assurant story doesn’t end there, however. The data also suggested ways to micro-segment the customers instead of relying on macro-groupings. The micro-segments made it possible to identify the high degree of variability in the customer base.
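As a rough illustration of micro-segmentation versus macro-grouping, one can bucket customers on the full combination of attributes rather than on a single coarse field. The attributes and values here are invented for the sketch, not taken from the article.

```python
# Sketch: micro-segmentation by full attribute combination
# (hypothetical customer fields, for illustration only).
from collections import defaultdict

def micro_segment(customers, keys):
    """Group customers by the combination of the given attributes."""
    segments = defaultdict(list)
    for customer in customers:
        segments[tuple(customer[k] for k in keys)].append(customer)
    return segments

customers = [
    {"tenure": "long",  "product": "auto", "region": "west"},
    {"tenure": "long",  "product": "auto", "region": "west"},
    {"tenure": "short", "product": "home", "region": "east"},
]
# A macro-grouping might split only on "product"; micro-segmenting on all
# three fields exposes finer-grained variability in the base.
segments = micro_segment(customers, ["tenure", "product", "region"])
```

With more attributes the segment count grows quickly, which is why this approach needs the large data volumes the article emphasizes: sparse segments are meaningless without enough observations behind them.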
Similar analysis identified corresponding variability among the CSRs, making a matching algorithm more accurate and ultimately more successful. The analysis of thousands of interactions indicated that the 80/20 rule (80% of calls should be answered in 20 seconds or less) was outdated.
Customers were not that uniform in their response to wait time. Most customers were fine with wait times double the conventional target or more, and Assurant managers were able to test other scenarios such as 60/60. At some point waiting too long does have a negative effect, but that point comes much later than 20 seconds, and the extra wait time gives Assurant the opportunity to make a better match between customer and CSR. And while every call center knows that some customers generate more revenue than others, making them higher-priority customers to save, Assurant can identify these customers in a more granular way, make additional decisions about how to manage them, and ultimately save them as customers.
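The 80/20 convention and the 60/60 alternative are both instances of the same service-level metric, the fraction of calls answered within a time threshold, which is straightforward to compute from wait-time data. The sample wait times below are made up for illustration.

```python
# Sketch: computing a call-center service level (share of calls
# answered within a threshold) from observed wait times.

def service_level(wait_times, threshold_seconds):
    """Return the fraction of calls answered within the threshold."""
    answered = sum(1 for w in wait_times if w <= threshold_seconds)
    return answered / len(wait_times)

waits = [5, 12, 18, 25, 40, 55, 62, 8, 31, 47]  # seconds, made-up sample

sl_classic = service_level(waits, 20)  # share answered within 20s (80/20 target)
sl_relaxed = service_level(waits, 60)  # share answered within 60s (60/60-style target)
```

Comparing the two numbers against their respective targets is how a manager could test whether a relaxed threshold, which buys time for better customer/CSR matching, still meets an acceptable service level.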
This article highlighted a number of counter-intuitive conclusions that dramatically changed operations at Assurant’s call center. In the end, where conventional wisdom says that state-of-the-art performance means retaining $.15-$.16 on the dollar, Assurant is able to boast retention well into the high 40s, and on some days as high as $.58. It’s not a surprise that good data and good analysis can produce good results, but in the Assurant story we see an example of counter-intuitive results that blow away the performance metrics of even the best call-center “carbon-based intelligence,” the subject matter expert.