What is the acceptable failure rate of an airplane? Well, it is not zero… no matter how hard we want to believe otherwise. There is a number, and it is a very low number. When it comes to machines, computers, artificial intelligence, and the like, they are perfectly imperfect. Mistakes will be made. Poor recommendations will occur. AI will never be perfect. That does not mean these systems do not provide value. People need to understand why machines make mistakes and set their beliefs accordingly. This means understanding the three key reasons why AI fails: implicit bias, poor data, and expectations.
The first challenge is implicit bias: the unconscious perceptions people hold that cloud thoughts and actions. Consider the recent protests over racial justice and police brutality and the powerful message that Black Lives Matter. The Forbes article AI Taking A Knee: Action To Improve Equal Treatment Under The Law is a great example of how implicit bias has played a role in discrimination and just how hard (but not impossible) it is to use AI to reduce prejudice in our law enforcement and judicial systems. AI learns from people. If implicit bias is in the training data, then the AI will learn that bias. Moreover, when the AI performs work, that work will reflect the bias… even if the work is for social good.
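To make that concrete, here is a minimal Python sketch (not from any real system) of how bias baked into historical labels propagates into whatever a machine learns from them. The groups, the rates, and the frequency-table "model" are all invented for illustration.

```python
# Hypothetical illustration: bias in the training labels propagates
# into the model. Groups, rates, and the toy "model" are invented.
import random

random.seed(42)

def make_history(n=10_000):
    """Simulate past decisions where reviewers flagged group 'b'
    more often despite identical underlying behavior."""
    records = []
    for _ in range(n):
        group = random.choice(["a", "b"])
        risky = random.random() < 0.10               # true risk: same for both groups
        flag_rate = 0.10 if group == "a" else 0.25   # biased human labeling
        flagged = risky or random.random() < flag_rate
        records.append((group, flagged))
    return records

def train(records):
    """'Train' by estimating P(flagged | group) -- the pattern any
    statistical learner would pick up from these labels."""
    counts, flags = {"a": 0, "b": 0}, {"a": 0, "b": 0}
    for group, flagged in records:
        counts[group] += 1
        flags[group] += flagged
    return {g: flags[g] / counts[g] for g in counts}

# Group "b" scores markedly higher, even though true risk was identical.
print(train(make_history()))
```

The skew lives in the labels, not in the algorithm, so any statistical learner, however sophisticated, would pick it up.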
Take, for example, the Allegheny Family Screening Tool, which is meant to predict which children in the welfare system might be at risk of abuse by foster parents. The initial rollout of this solution had some challenges, though. The local Department of Human Services acknowledged that the tool might have racial and income bias: triggers like neglect were often misconstrued, associating foster parents who lived in poverty with inattention or mistreatment. Since these problems came to light, tremendous steps have been taken to reduce the implicit bias in the screening tool; eliminating it is much harder. When it comes to bias, how do people manage the unknown unknowns? How is social context addressed? What does “right” or “fair” behavior mean? If people cannot identify, define, and resolve these questions, then how will they teach the machine? This is a major reason why AI will be perfectly imperfect: implicit bias.
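One concrete form such steps can take is a routine audit of a tool's outputs. Below is a hedged Python sketch of one common check: comparing flag rates across groups against the four-fifths rule of thumb. The function, the threshold, and the toy decisions are assumptions for illustration, not details of the Allegheny tool.

```python
# Hypothetical audit sketch: surface (not eliminate) group disparities
# in a screening tool's output. Not based on any real tool's internals.
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: iterable of (group, flagged) pairs.
    Returns the flag rate per group and the min/max rate ratio."""
    totals, flags = defaultdict(int), defaultdict(int)
    for group, flagged in decisions:
        totals[group] += 1
        flags[group] += bool(flagged)
    rates = {g: flags[g] / totals[g] for g in totals}
    return rates, min(rates.values()) / max(rates.values())

decisions = [("a", True), ("a", False), ("b", True), ("b", True)]
rates, ratio = disparate_impact(decisions)
print(rates, ratio)  # {'a': 0.5, 'b': 1.0} 0.5
if ratio < 0.8:      # four-fifths rule of thumb, an illustrative threshold
    print("warning: flag rates differ substantially across groups")
```

An audit like this can flag a disparity, but deciding whether that disparity is unfair still requires the human judgment those questions demand.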
The second challenge is data. Data is the fuel for AI. The machine trains on ground truth (i.e., examples of correct decisions, not rules for how to make them) and on large volumes of data to learn the patterns and relationships within it. If our data is incomplete or flawed, then AI cannot learn well. Consider COVID-19. Johns Hopkins, The COVID Tracking Project, the U.S. Centers for Disease Control and Prevention (CDC), and the World Health Organization all report different numbers. With such variation, it is very difficult for an AI to glean meaningful patterns from the data, let alone find those hidden insights. More challenging still, what about incomplete or erroneous data? Imagine teaching an AI about healthcare but only providing data on women’s health. That limits how we can use AI in healthcare.
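A short Python sketch of that women's-health scenario shows why: a model fit to one population can fail badly on another whose baseline differs. The distributions, the midpoint-threshold "model", and every number below are hypothetical.

```python
# Hypothetical illustration: a model trained on one population
# generalizes poorly to another. All numbers are invented.
import random

random.seed(0)

def sample(baseline, n=5_000):
    """Patients as (measurement, has_condition); the condition raises the value."""
    data = []
    for _ in range(n):
        has_condition = random.random() < 0.2
        value = random.gauss(baseline + (15 if has_condition else 0), 5)
        data.append((value, has_condition))
    return data

women = sample(baseline=100)  # training population
men = sample(baseline=115)    # unseen population with a shifted baseline

# "Training": midpoint between the healthy (100) and condition (115)
# means observed in the women-only training data.
threshold = 107.5

def accuracy(data):
    return sum((value > threshold) == flag for value, flag in data) / len(data)

print(f"women: {accuracy(women):.0%}")  # high -- threshold fits this baseline
print(f"men:   {accuracy(men):.0%}")    # poor -- healthy men already exceed it
```

The model is not wrong about the data it saw; it simply never saw the other half of the picture.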