Is AI Destroying Equality?

Two huge things went wrong in the internet era: lack of inclusion and lack of impartiality. It is therefore hardly surprising that Microsoft’s AI bot Tay began tweeting racist and inflammatory statements within 24 hours of its launch, or that Google’s image-recognition software mislabeled photos of dark-skinned people as gorillas. Against this background, it does not seem reasonable to leave it up to companies alone to define the values implemented in AI applications.

If we keep in mind that over half of the world is still offline today and cannot benefit from the latest technologies, it is clear that this disparity will continue to deepen in AI applications unless it is corrected. Studies have shown that a lack of inclusion and impartiality has a disproportionately negative effect on marginalized groups while benefiting an advantaged few. For the most part, AI ethics councils and working groups try to achieve inclusive applications by setting rules and standards for engineers, funders, and regulators. But differing interpretations of intelligence, morality, and ethics are, if left unbalanced, construed in favor of already dominant worldviews. Moreover, there are valid concerns about scientists losing control over AI applications, especially since many of the brilliant people building these applications do not fully understand how they work.

Let’s get it right this time

To Sandhya Venkatachalam, lack of inclusion and lack of impartiality mean that, as a global society, we did not get enough people online quickly enough and did not ensure that internet-based technologies and solutions were unbiased. This goal was missed with large parts of digitization, yet there still needs to be a way for AI solutions to overcome these problems. AI technologies have far-reaching consequences.

Therefore, it is vital to find a method that includes all people and perspectives and integrates these into AI processes in general. Venkatachalam compares this hurdle to the internet boom: who benefits from advances in AI and how?

“When technology revolutions occur, usually something that was previously very expensive gets much cheaper and becomes ubiquitous. If “connecting and communicating” became dirt affordable with the internet, I would argue that “analyzing and predicting” will become dirt cheap with AI. Who will benefit from this? People and organizations that have unique access to data.” — Sandhya Venkatachalam

And then there is the more subtle yet severe problem of biased data. If we want to derive predictions from large amounts of data, we have to make sure that we can assess that data and interpret it correctly. That this can be a difficult task is described in more detail in my article Unchecked AI Can Mirror Human Behavior. For example, since most collected health data comes from white men, many AI models are currently skewed toward finding experimental drugs that work best for white men.
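To make the health-data example concrete, here is a minimal, hypothetical sketch (not from the original article) of how a training set dominated by one group can produce a model that works well for that group and poorly for everyone else. The synthetic data, the make_group helper, and every number below are invented for illustration; only scikit-learn’s standard LogisticRegression API is assumed.

```python
# Toy illustration (hypothetical data): a model trained mostly on one
# demographic group learns that group's pattern and fails on the other.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, effect):
    """Simulate patients: one biomarker feature and a binary outcome.
    The biomarker's relationship to the outcome differs between groups."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] * effect + rng.normal(scale=0.5, size=n)) > 0
    return x, y.astype(int)

# Training data: 95% from group A, only 5% from group B,
# whose biomarker relates to the outcome in the opposite direction.
xa, ya = make_group(950, effect=+1.0)
xb, yb = make_group(50, effect=-1.0)
X = np.vstack([xa, xb])
y = np.concatenate([ya, yb])

model = LogisticRegression().fit(X, y)

# Evaluate on fresh, equally sized samples from each group.
xa_test, ya_test = make_group(1000, effect=+1.0)
xb_test, yb_test = make_group(1000, effect=-1.0)
print("accuracy on group A:", model.score(xa_test, ya_test))  # high
print("accuracy on group B:", model.score(xb_test, yb_test))  # well below chance
```

The point of the sketch is not the specific numbers but the mechanism: the model optimizes for the pattern that dominates the training set, so the under-represented group inherits predictions tuned for someone else.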

Getting serious about ethical AI is no easy task

Applying ethical AI concepts in real-world settings requires considerable time and effort, from regulatory and legal departments to data science teams. Although there is no one-size-fits-all solution for responsible and inclusive AI, that should not hold companies back from trying to achieve ethical AI applications through an appropriate mix of the latest research, legal precedents, and professional best practices.

Ethical AI should be the standard, not the exception. And if we are serious about this, we must learn to listen to the voices of those who do not yet wield power and influence. If we don’t, AI will shift from being the world’s greatest problem solver to the world’s greatest facilitator of injustice.

Original post: https://medium.com/digital-diplomacy/is-ai-destroying-equality-78b043d6ea64
