The Ancient Greek playwrights knew how to tell a good story, but occasionally found themselves searching for a way to solve whatever conflict they had concocted. So they invented the “deus ex machina”—literally, god from the machine—in which an actor playing a god was brought on stage via a mechanical device to miraculously resolve the problem as only a god can do.
These days, artificial intelligence (AI) is becoming our version of the deus ex machina, promising to swoop in and solve our most pressing business problems. But, like the Greek gods, AI can be fickle and fallible.
AI has the potential to significantly improve the way we make decisions. It can also make recommendations that are unfair, harmful, and fundamentally wrong. Bias can creep into our models in many ways, from poor data quality to spurious correlations.
Fortunately, though, by applying technological, ethical, and legal governance around the development and use of AI, we can significantly reduce the impact of bias in our models.
Forms of bias
There are two main kinds of bias in AI.
The first is algorithmic bias, which comes from poor or unrepresentative training data. If we’re training our models to make decisions about a set of people, for example, but our training data is not representative of that population, then our results are going to be off. The second is societal bias, which comes from our own personal biases, assumptions, norms, and blind spots.
Predictive policing tools are a useful example of both types of bias. Location-based policing algorithms draw on data about events, places, and crime rates to predict when and where crimes will happen. Demographic-based algorithms use data about people’s age, gender, history of substance abuse, marital status, and criminal record to predict who might commit a crime in the future. Dozens of cities in the U.S. use PredPol and COMPAS, the most common of these tools.
However, predictive policing tools sometimes produce racist results. If these models are trained on data that is biased against people of color, they will produce outcomes that echo that bias. If we don’t bake in layers of governance to limit this kind of bias, we could see—and in fact, we are seeing—devastating real-world consequences.
Technical governance
The first layer of protection against AI bias is technical governance. The mathematical methods used to build our algorithms, and the testing and feedback loops around them, must ensure that machine learning models are as accurate and reliable as possible.
Maybe you only use half of your training data to train the model and use the other half to test it. Or perhaps you test for spurious correlations before training your model. Whatever your methods, it’s crucial to evaluate your data for the potential to create algorithmic bias before training your models.
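The half-and-half approach above can be sketched in a few lines. This is a minimal illustration, not a specific tool the article describes: the function names are illustrative, the split is a simple shuffled holdout, and the Pearson correlation is used only as a quick first screen for suspicious relationships (say, between a sensitive attribute and the label) before training.

```python
import random

def train_test_split(rows, test_fraction=0.5, seed=42):
    """Hold out a fraction of the data so the model is never
    evaluated on examples it was trained on."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def correlation(xs, ys):
    """Pearson correlation: a quick screen for spurious or
    suspicious relationships in the training data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0
```

A correlation near 1.0 or -1.0 between a feature that should be irrelevant and the outcome is a red flag worth investigating before any model is trained.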
Ethical governance
The second layer is ethical governance, which helps stakeholders balance the benefits and trade-offs for those who will be using, or who might be affected by, the algorithms.
For some applications, a false positive is far less costly than a false negative; for others, the reverse is true. Using AI to screen for skin cancer is a useful example. AI can look at moles on the skin to predict whether they are cancerous. When building cancer-detecting models, stakeholders have to weigh the harm to a patient who is told they don’t have cancer when they do (a false negative) against the harm to one who is told they do have cancer when they don’t (a false positive).
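That trade-off usually surfaces as a choice of decision threshold. The sketch below is a hypothetical illustration (not a real screening system): it counts false negatives and false positives at a given threshold, showing how lowering the threshold flags more moles for review, trading missed cancers for more false alarms.

```python
def confusion_counts(probabilities, labels, threshold):
    """Count false negatives and false positives at a decision
    threshold (labels: 1 = cancerous, 0 = benign)."""
    fn = fp = 0
    for p, y in zip(probabilities, labels):
        predicted = 1 if p >= threshold else 0
        if y == 1 and predicted == 0:
            fn += 1  # missed a real cancer
        elif y == 0 and predicted == 1:
            fp += 1  # false alarm on a benign mole
    return fn, fp
```

Where on this curve to sit is not a purely technical question; it is exactly the kind of decision an ethics committee or empowered stakeholders should be weighing in on.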
Having a dedicated committee to evaluate the ethics of a model—or simply stakeholders who are empowered to ask pointed questions—can go a long way toward harm reduction.
Legal governance
U.S. lawmakers are quickly catching on to the need for legislation aimed at reducing or eliminating AI bias. In 2021 alone, AI bills or resolutions were introduced in at least seventeen states and were enacted in Alabama, Colorado, Illinois, and Mississippi.
One Illinois bill requires employers that rely on AI to determine whether applicants qualify for an in-person interview to report demographic information to the Department of Commerce and Economic Opportunity. The department is then required to analyze the data to decide whether “the data discloses a racial bias in the use of artificial intelligence.” This kind of legislation is helpful because it motivates businesses to do their homework in reducing algorithmic and societal bias.
The future of AI
Like most great technological advances, AI can be a force for good and bad. To stay on the right side of humanity, governance processes must be wrapped around its development and use.
At ServiceNow, AI governance is treated as an extension of integrated risk management across every aspect of technology development and operations. By helping organizations operationalize governance across all these areas, we can scale AI safely, in a way that continues to make a positive contribution to our lives.
Editor’s note: The author would like to acknowledge these publications as having influenced both this article and how he thinks about AI.
- Julia Angwin et al., “Machine Bias”, ProPublica (online, 23 May 2016)
- Matt J. Kusner and Joshua R. Loftus, “The Long Road to Fairer Algorithms” (2020) 578 Nature 34
- D. Dawson, E. Schleiger, J. Horton, J. McLaughlin, C. Robinson, G. Quezada, J. Scowcroft, and S. Hajkowicz, Artificial Intelligence: Australia’s Ethics Framework (Data61 CSIRO, 2019)
- Natalia Mesa, “Can the criminal justice system’s artificial intelligence ever be truly fair?”, Massive Science (online, 13 May 2021)
- Greg Brockman, Mira Murati, and Peter Welinder, “How will OpenAI mitigate harmful bias and other negative effects of models served by the API?”, OpenAI (18 September 2020)
- Ben Dupré, 50 Ethics Ideas You Really Need to Know (Quercus, 2013)
Original post: https://www.forbes.com/sites/servicenow/2021/11/05/governing-the-future-of-ai/?sh=24ef2e682d2a&s=09