
Ethical Implications of Industrialized Analytics

As analytics are embedded ever more deeply into the processes and systems we interact with, they directly impact us far more than in the past. No longer confined to serving up marketing offers or assessing the risk of a credit application, analytics are beginning to make truly life-and-death decisions in areas as diverse as autonomous vehicles and healthcare. These developments demand attention to the ethical and legal frameworks needed to account for today’s analytic capabilities.

ANALYTICS WILL CREATE WINNERS & LOSERS

In my recent client meetings and conference talks, such as the Rock Stars of Big Data event in early November, a certain question has come up repeatedly. When I discuss the analytics embedded in autonomous vehicles, I am often asked about the ethics and legalities behind them. A lot of focus is given to the safety of autonomous vehicles and rightly so. If the automated analytics in the vehicle don’t work right, people can die. This means a lot of scrutiny is being, and will continue to be, placed on the algorithms under the hood. I often make the point that the technology for autonomous vehicles will be ready well before our laws and public opinion are able to let them loose on the streets.

However, it is very important to remember that no matter how well developed the algorithms, there will be people killed or injured by an autonomous vehicle malfunction who arguably would not otherwise have been harmed. Such cases are regrettable and certainly horrible for those involved. But the fact is that any new technology results in some injuries and deaths that wouldn’t otherwise have occurred. Think of all the car accidents that never would have happened if we didn’t have cars. With no risk of hitting something head-on at 60 mph, horses were in many ways safer. But horses had their own risks and weren’t completely safe either. We are constantly trading one set of risks for another; sometimes we just don’t have a good grasp on what those risks are.

DON’T LET THE EXCEPTIONS ESTABLISH THE RULES

To illustrate my point, I often tell the story of a family friend I had growing up. She was in a bad car accident, and the police said she survived only because she was NOT wearing a seat belt. A car t-boned her at high speed on the driver’s side. She saw it coming and jumped toward the passenger seat, saving herself. Had she been wearing a seat belt, she would have been trapped and killed. Does this mean we should abolish seat belts because they can cause some people, like my friend, to die in an accident? Of course not! Seat belts save far more lives than they cost. My friend was an exception.

This is a very important point. There WILL be people who die because they were wearing a seat belt who otherwise would have lived. As a society, we have accepted that because the number of people saved by seat belts is so much higher. We have made an ethical decision to accept some exceptional events, and our laws reflect that. You won’t win a lawsuit claiming you were injured because you wore a seat belt, for example, because it is taken as fact that seat belts are the safest option.

FOCUS ON THE NET GAIN, NOT THE EXCEPTIONS

Similarly, we’ll have to take a bigger view of autonomous cars, automated medical injections, and similar processes in which analytics make decisions on our behalf that have the potential to harm or kill us. There will certainly be cases where an autonomous car ends up in a wreck that a human arguably could have avoided because the situation was so unusual that the algorithms couldn’t handle it. There will also be cases where a patient is harmed by too much or too little medicine being dispensed by an algorithm.

What we have to do is carefully assess the ratio of those unfortunate outcomes to the positive outcomes. For example, an autonomous car will never fall asleep and run off the road. Thus, many injuries or deaths that otherwise may have occurred will be avoided. Similarly, many people will forget to take medicine or will inject or ingest the wrong dosage when doing it manually. So, it isn’t like there were no problems before an automated dosing process was put in place. The critical question is simply: Do we have a much lower death and injury rate with our automated analytic processes than we had before? If so, we should be comfortable implementing them.
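The rate comparison above can be framed as a tiny calculation. This is only an illustrative sketch: the harm rates and the improvement threshold below are hypothetical numbers chosen to show the shape of the question, not real safety statistics.

```python
# Illustrative sketch only -- the rates and threshold here are hypothetical,
# not real safety data. It frames the critical question from the text:
# is the automated process's harm rate much lower than the manual baseline?

def net_gain_check(baseline_rate: float, automated_rate: float,
                   required_improvement: float = 2.0) -> bool:
    """Return True if the automated process cuts the harm rate by at
    least the required factor relative to the manual baseline."""
    return baseline_rate / automated_rate >= required_improvement

# Hypothetical harm rates (e.g., incidents per 100 million miles)
human_driving = 1.1
autonomous_driving = 0.3

print(net_gain_check(human_driving, autonomous_driving))  # True: roughly 3.7x safer
```

The point of the threshold parameter is that "much lower" is a societal choice, not a technical one; how big the improvement factor must be before we accept the new exceptions is exactly the decision discussed below.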

LAWS AND PUBLIC OPINION MUST CATCH UP

With the spread of automated analytics, ethical questions will arise frequently. It is incumbent upon those of us in the analytics profession to campaign for proper analysis of the risks and rewards and to push society towards a rational assessment of these technologies. Our laws also have to take into account what is a freak accident and what is truly a case of negligence or liability.

None of us wants to be the unusual person who is killed by an algorithm that goes wrong. However, if the risk of death by that algorithm is far smaller than the risk of death prior to the algorithm being implemented, aren’t we much better off in aggregate? In the coming years, we’ll have some big decisions to make as a society in terms of how much lowering of risk must be achieved for automated analytics to be implemented. If we make those choices wisely, I believe that we’ll benefit from a lot of innovation that improves our lives and lowers our risks in the aggregate by quite a bit. I look forward to my first ride in an autonomous car!