
At first, the concept of an unfair machine learning model may seem like a contradiction. How can machines, with no concept of race, ethnicity, gender or religion, actively discriminate against certain groups? Yet algorithms do discriminate and, if left unchecked, they will continue to make decisions that perpetuate historical injustices. This is where the field of algorithm fairness comes in.
In this article, we will explore the concept of model bias and how it relates to the field of algorithm fairness. To highlight the importance of this field, we will discuss examples of biased models and their consequences. These include models from different industries that discriminate based on gender or race. We end by touching on how models become biased in the first place.
What is algorithm fairness?
In machine learning, the terms algorithm and model are often used interchangeably. To be precise, algorithms are mathematical functions like Linear Regression, Random Forests or Neural Networks. Models are algorithms that have been trained on data. Once trained, a model is used to make predictions, which can help an automated computer system make decisions. These decisions can include anything from diagnosing a patient with cancer to accepting mortgage applications.
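To make the distinction concrete, here is a minimal sketch in Python (using scikit-learn, with made-up loan data and feature names): the algorithm is the untrained estimator, and the model is what you get once it has been fit to data.

```python
# Minimal sketch: an "algorithm" (an untrained estimator) becomes a "model"
# once it has been trained on data. The loan data below is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy applications: [income in thousands, existing debt in thousands]
X = np.array([[50, 5], [20, 15], [80, 2], [30, 20]])
y = np.array([1, 0, 1, 0])  # 1 = loan was repaid

algorithm = LogisticRegression()  # the mathematical function, not yet trained
model = algorithm.fit(X, y)       # the model: the algorithm fit to the data

# The trained model can now make predictions that drive automated decisions,
# e.g. accepting or rejecting a new mortgage application.
print(model.predict([[40, 10]]))
```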
No model is perfect, meaning it can make incorrect predictions. If these errors systematically disadvantage a group of people, we say the model is biased. For example, an unfair/biased model could reject mortgage applications more often for women than for men. Similarly, we could end up with a medical system that is less likely to detect cancer in black patients than in white patients.
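To see what "systematically disadvantage" means in practice, here is a minimal sketch (with hypothetical data and column names) that compares decisions and errors across groups. Large gaps between groups are a warning sign of bias.

```python
# Minimal sketch (hypothetical data): compare a model's decisions and errors
# across groups to check for systematic disadvantage.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "F", "M"],
    "approved": [0,   0,   1,   1,   1,   0,   0,   1],  # model's decision
    "repaid":   [1,   0,   1,   1,   0,   0,   1,   1],  # true outcome
})

for gender, group in df.groupby("gender"):
    approval_rate = group["approved"].mean()
    # False negative: a creditworthy applicant (repaid=1) who was rejected
    creditworthy = group[group["repaid"] == 1]
    fnr = (creditworthy["approved"] == 0).mean()
    print(f"{gender}: approval rate={approval_rate:.2f}, "
          f"false negative rate={fnr:.2f}")
```

In this toy data, creditworthy women are rejected far more often than creditworthy men, which is exactly the kind of systematic error the mortgage example describes.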
Algorithm fairness is the field of research aimed at understanding and correcting biases like these. Specifically, it includes:
- Researching the causes of bias in data and algorithms
- Defining and applying measurements of fairness
- Developing data collection and modelling methodologies aimed at creating fair algorithms
- Providing advice to governments and corporations on how to regulate algorithms
Why is algorithm fairness important?
As mentioned, machine learning models are being used to make important decisions. The consequences of incorrect predictions could be devastating for an individual. If the incorrect predictions are systematic, then entire groups could suffer. To understand what we mean by this, it will help to go over a few examples.
Apple recently launched a credit card, the Apple Card. You can apply for the card online and you are automatically given a credit limit. As people started to use this product, it was found that women were being offered significantly lower credit limits than men. This happened even when the women were in a similar financial position (and posed a similar credit risk). For example, Apple co-founder Steve Wozniak said he was offered a credit limit 10 times higher than his wife's.
Another example is a system used by Amazon to help automate recruitment. Machine learning was used to rate the resumes of new candidates. To train the model, Amazon used information from historically successful candidates. The issue is that, due to the male dominance of the tech industry, most of these candidates were male. The result was a model that did not rate resumes in a gender-neutral way. It went as far as penalising the word “women’s” (e.g. captain of the women’s soccer team).
These examples show that models can make predictions that discriminate based on gender. Women who are otherwise equal to their male counterparts face significantly different outcomes. The consequence, in these cases, is a lower credit limit or the rejection of a job application. Both of these outcomes could have serious financial implications. In general, models like these will increase the economic inequality between men and women.
Models can also discriminate based on race. COMPAS was an algorithm used in the American criminal justice system to predict whether a defendant was likely to re-offend. An incorrect prediction (i.e. a false positive) could result in a defendant being falsely imprisoned or facing a longer prison sentence. It was found that the false positive rate for black defendants was twice that for white defendants. That is, black defendants were twice as likely to be incorrectly labelled as potential re-offenders.
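The COMPAS disparity is a statement about per-group false positive rates: among people who did not re-offend, how many were still flagged as high risk. Here is a minimal sketch (with made-up data, not the actual COMPAS dataset) of how you would measure this:

```python
# Minimal sketch (made-up data, not the actual COMPAS dataset): false positive
# rate per group, i.e. the share of people who did NOT re-offend but were
# still predicted to be high risk.
import pandas as pd

df = pd.DataFrame({
    "race":        ["black"] * 5 + ["white"] * 5,
    "predicted":   [1, 1, 0, 0, 1, 1, 0, 0, 0, 1],  # 1 = flagged as high risk
    "re_offended": [0, 0, 0, 0, 1, 0, 0, 0, 0, 1],  # 1 = actually re-offended
})

did_not_reoffend = df[df["re_offended"] == 0]
fpr_by_group = did_not_reoffend.groupby("race")["predicted"].mean()
print(fpr_by_group)  # in this toy data, one group's rate is double the other's
```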
These examples show that biased algorithms are being used for different problems across many industries. The scale at which these algorithms make decisions is also a concern. A biased human is limited in the number of loans they can underwrite or the number of people they can convict. An algorithm can be scaled up to make all of those decisions. Ultimately, the consequences of a biased algorithm can be both severe and widespread.
How do algorithms become unfair?
Clearly, biased algorithms are harmful, but how do we end up with them in the first place? The term algorithm bias is actually a bit misleading. Algorithms, by themselves, are not inherently biased. They are just mathematical functions. By training one of these algorithms on data, we obtain a machine learning model. It is the introduction of biased data that leads to a biased model.
Data can be biased for different reasons. As with the Amazon hiring model, it could be due to a lack of representation of certain groups in the training data. It could also be due to model features that are associated with race or gender. For example, due to the history of racial segregation in South Africa, where you live is very predictive of your race. Ultimately, the data we collect will reflect historical injustice, which, through training, can be captured in models.
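One way to spot this kind of proxy (a minimal sketch with made-up data; the column names are hypothetical) is to check how strongly a supposedly neutral feature, such as neighbourhood, is associated with the protected attribute. If the association is strong, a model can effectively discriminate by race even when the race column is removed from the training data.

```python
# Minimal sketch (made-up data): a "neutral" feature such as neighbourhood
# can act as a proxy for a protected attribute like race.
import pandas as pd

df = pd.DataFrame({
    "neighbourhood": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "race":          ["black", "black", "black", "white",
                      "white", "white", "white", "black"],
})

# The share of each race within each neighbourhood. If neighbourhood largely
# determines race, any model using neighbourhood can pick up racial bias
# without ever seeing the race column.
print(pd.crosstab(df["neighbourhood"], df["race"], normalize="index"))
```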
How bias arises in data, and how we fix it in models, is a complicated issue. In the future, I hope to write more about algorithm fairness. I’d like to go into more detail about how data and models can be biased. Following this, I’d like to explain the ways we can measure bias and the methods we can use to ensure we do not end up with biased models.
Related to the concept of fairness is model interpretability. In determining fairness, we are also trying to understand how a model makes predictions. In other words, we are interpreting the model. In general, the more interpretable a model the easier the analysis around fairness will be. You can read more about interpretability in my article below:
Image Sources
All images are my own or obtained from www.flaticon.com. In the case of the latter, I have a “Full license” as defined under their Premium Plan.
References
D. Pessach & E. Shmueli, Algorithmic Fairness (2020), https://arxiv.org/abs/2001.09784
A Gentle Introduction to the Discussion on Algorithmic Fairness (2017), https://towardsdatascience.com/a-gentle-introduction-to-the-discussion-on-algorithmic-fairness-740bbb469b6
J. Vincent, Apple’s credit card is being investigated for discriminating against women (2019), https://www.theverge.com/2019/11/11/20958953/apple-credit-card-gender-discrimination-algorithms-black-box-investigation
S. Wachter-Boettcher, Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech (2017), https://www.goodreads.com/book/show/38212110-technically-wrong
The Guardian, Amazon ditched AI recruiting tool that favored men for technical jobs (2018), https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine
Wikipedia, Algorithmic bias (2021), https://en.wikipedia.org/wiki/Algorithmic_bias
Original post: https://towardsdatascience.com/what-is-algorithm-fairness-3182e161cf9f