Throughout history, several technological inventions have triggered debate about the ethics of their implementation, or even the principle behind their existence. Some of us were around to witness the uproar caused by the successful reproductive cloning of Dolly the sheep decades ago. It raised questions about the possibility of reproductive human cloning, and many were unsettled by both the philosophical and the practical aspects of it.
A similar debate is happening around Artificial Intelligence (AI) and the ethical issues surrounding it; according to the World Economic Forum, there are nine of them.
When it comes to discussing the ethics of AI and how the technology can "go wrong", the public imagination remains shaped by scenes from Hollywood movies ("What if it turns against us and starts shooting people on the street?"). While we must stay open to all possibilities, there are more grounded concerns about the ethics of AI.
In this article, we take a look at several of these aspects of ethics and AI.
On the issue of fairness
On the first day of the virtual EmTech Asia 2020 conference, organised by Koelnmesse Pte Ltd and MIT Technology Review, Google director of research and renowned scientist Peter Norvig presented on the issue of fairness in the implementation of AI.
He gave the example of Google Images search results. In the past, when users searched for the keyword "doctor", the images were likely to be of Caucasian men in white lab coats. Today, there is more gender and racial diversity in the results.
Norvig pointed out that fairness is reflected in data quality and in the decision-making process, and that fairness achieved through unawareness remains a problem. There has to be a conscious effort from scientists and engineers to ensure it, but this process can be complicated.
"There is the mathematical impossibility of it … If you are maximising one thing, then you cannot maximise the rest," he explained. "A parallel example would be providing a user manual in English, Chinese, or Spanish, which is sufficient only if you speak those languages. But developing a manual available in every language would take developers away from developing quality products."
Despite this challenge, Norvig believes that it is not enough for scientists and engineers to simply minimise errors.
"We want to convince people that fairness does exist," he stressed.
As a recommendation, he suggested that product developers pay attention to the following checklist: data collection, model and objective choice, testing, deployment, and monitoring/maintenance.
When asked by an audience member about tech companies' success rate in reducing bias, Norvig said that this checklist has helped developers become more aware of it.
"We also keep on adding new factors. Back then we focussed on security, but now we also focus on fairness," he revealed.
What they don’t talk about when they talk about AI: Human labour
This might come as a surprise, as human labour is often missing from discussions of AI and ethics. But anthropologist Mary L. Gray, in her book Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, detailed the unseen human labour that enables the implementation of AI as we see it today.
These people work on tasks such as data labelling, and a growing number of people offer the service. But Gray called it "ghost work", as the labour conditions often end up "devaluing and hiding the people who contribute to better search results and projects in the startup world."
“As these people are largely unregulated, they are seen as easily disposable and often seen as temporary help,” she pointed out.
While the prospect might seem daunting at first, Gray stressed that these jobs can actually provide value, especially in a time of global health crisis like this. But steps need to be taken to prevent and stop potential abuse.
She stressed that there is currently no law governing on-demand contract workers, and that the public has to keep pushing for one.
“The market isn’t going to fix this,” she warned.
Tech and human
In February, in a workshop organised by the Pontifical Academy for Life in the Vatican, Pope Francis stated that “… the digital galaxy, and specifically artificial intelligence, is at the very heart of the epochal change we are experiencing.”
This reflects the importance of AI technology in our lives today. Ideally, we want to be able to implement it in the most ethical way possible.
Now that we have seen the complexity of the issue of fairness in AI implementation, and the more practical labour issues behind it, it is time to examine the big question: Is technology only as good as the humans behind it?
This might remind you of the argument about knives: as a tool, a knife is neither good nor evil. Its value depends on the person who uses it, and what they are using it for.
Does this mean the concerns are real? That AI is nothing more than a knife, and that we can only hope it does not fall into the wrong hands?
Norvig offered a hopeful message. When an audience member raised this concern, he reminded the audience why humans develop technology in the first place.
"We have the system so that it can do better than we can," he stressed.