Objective Algorithms Are a Myth

In Coded Bias, a documentary by Shalini Kantayya, the director follows MIT Media Lab researcher and Algorithmic Justice League founder Joy Buolamwini as she discovers one of the fundamental problems with facial recognition. While working on a facial recognition art project, Buolamwini realizes that the computer vision software has trouble tracking her face but works fine when she puts on a white mask. It is just the latest evidence of the kind of bias baked into facial recognition and A.I. systems.

Along with Buolamwini, Kantayya interviews authors, researchers, and activists like Cathy O’Neil, Meredith Broussard, Safiya Noble, and Silkie Carlo, unraveling the problems with current technologies like facial recognition and crime prediction software. These technologies often connect back to the dark historical practices of racialized surveillance, eugenics, and physiognomy.

The film, which was screened at Sundance, focuses a critical eye on the assumed “objectivity” of algorithms, which O’Neil defines as “using historical information to make a prediction about the future.” While algorithms are often understood as unbiased, objective, and amoral, they can reproduce the biases of the humans who create them. Broussard says that we imbue technology with “magical thinking,” which lauds its benefits while obscuring its harms.

Coded Bias shows how algorithmic bias causes real-world harm. The film depicts how a Houston school district used a secret algorithm in its “value added” teacher evaluation system that classified even award-winning teachers as “bad teachers,” and how the facial recognition software police use often misidentifies Black suspects.

The film also shows how the technologies that are deeply embedded in our lives augment existing asymmetrical power dynamics. The algorithms that shape people’s lives are often hidden in a “black box” — built by large tech companies who use proprietary rights protections to block the public from knowing how their algorithms work and what data is being collected.

Kantayya talked to OneZero over the phone about how she learned about algorithmic bias and how she hopes Coded Bias can empower citizens to understand and protect their rights.

Coded Bias will be released in select theaters this fall. This Q&A has been edited for length and clarity.

OneZero: How did you come to learn about algorithmic bias, and what inspired you to make a film about it?

Kantayya: A lot of my work is about disruptive technologies and how they make society more or less fair and equal. Issues around race, gender, and class are things I tend to think about, and I discovered the work of women like Joy Buolamwini, Cathy O’Neil, and Zeynep Tufekci. I became interested in the dark underbelly of big tech, and that sort of sent me down the rabbit hole.

What are some of the instances of algorithmic bias that are featured in the film?

There was an Amazon algorithm that wasn’t trying to be biased, but it picked up on past sexism in hiring practices and retention, and started to screen out any woman who had a women’s college or a women’s sport on her résumé. So unknowingly, this A.I. discriminated against women in the hiring process. The central point of the film is that facial recognition doesn’t work as well on dark faces or on women, and yet those are the people who are most targeted by racial profiling. Just recently, a man in Detroit was held for 30 hours after being wrongly identified by facial recognition. So the examples are really everywhere.

What do companies, people who are building this tech, need to do to weed out some of the bias that gets encoded in their algorithms?

The audience of my film isn’t actually technologists. The audience is actually citizens. Should companies be more responsible and inclusive in their hiring practices and in who is in the room when these technologies are being built? Absolutely. But my focus really isn’t on the companies. To me, it isn’t about making a perfect algorithm, it’s about making a more humane society. The film is about empowering citizens to understand these tools so that we can have laws and protections against these practices, and so that it won’t be up to tech companies to sell their technology to the FBI or police or other entities without an elected person being in the loop. The fact that corporations are doing this doesn’t make it any better than when a government does it. I think citizens need to start demanding that Google and Apple not replicate authoritarian surveillance technology.

What are some of the ways ordinary citizens can fight against these systems of surveillance?

I think we have a moment where big tech is listening. In the last month, IBM said it would basically get out of the facial recognition game — stop researching it, stop offering it, stop selling it. Amazon said it would press pause for one year on sales of its facial recognition technology to police, and Microsoft said it would stop selling to police as well. So I feel like because we have this movement for equality on the streets, we have a moment to actually pass meaningful legislation. And I think we need to push for legislators who will protect our data as part of our human rights and will protect us from these invasive technologies that violate our civil rights.

Original post: https://onezero.medium.com/objective-algorithms-are-a-myth-22b2c3e3d702
