Ethical AI: Who Watches the Watchmen?
The Axial Age saw the independent rise of multiple societies, each with its own set of religious and cultural practices. The Silk Road arguably marked the end of the Axial Age, connecting these previously isolated cultures in a network that traded not only goods and germs, but also ideas.
And the door was opened, permanently, to the challenges of multicultural exchange. The digital age has clearly done something similar, connecting the world’s societies in unprecedented ways and enhancing our ability to exchange goods and ideas. And we’ve felt the excitement as well as the growing pains along the way.
For several years now, artificial intelligence has promised us new possibilities: a lighter burden, greater resource efficiency. But this promise plays out against the backdrop of an internet that has turned out to be, above all, a tool of social interaction.

Computers are of course computing, crunching numbers and executing tasks humans aren’t so good at. But it turns out that these computations increasingly exist to enable our sociality.

Computers don’t just crunch numbers; they crunch numbers to decide whom to connect with whom, what prices to set on goods, how to get people from point A to point B, and which crowd-sourced entries are true and which are false.
It is inevitable that in mediating social processes, AI must make ethical judgment calls. And it is also clear that AI simply reflects the values inherent in its datasets and imbued by its programmers.
Now layer on top of this that only a very small fraction of the population will ever program AI, that only a very small fraction of those algorithms will wind up being used in production, and that only a very small number of those will run inside the tech monopolies serving the vast majority of the population around the globe.
This means that the values of a particular subset of people will be making ethical judgment calls for others who may not share those ethical principles. Do we care that Silicon Valley, arguably one of the most liberal places in one of the most liberal countries in the world, and Chinese giants like WeChat and Taobao will be systematically arbitrating the world’s values? How do we begin to deal with these digital multicultural exchanges?
Anthropologists have had to create their own frameworks for understanding and evaluating cultural differences. They provide helpful tools for beginning to think about these tough questions.
First, we must try to understand how we make sense of other cultures. We can all recognize that we are biased observers: we have been raised in a culture that evaluates other cultures through the lens of its own. At least for those of us in the West, anthropologists Richard Shweder and Edmund Osgood have identified at least three main ways of comparing cultures and their values: universalism, evolutionism, and relativism. Each has its strengths and weaknesses.

Universalism

Universalism is a mode of thought which makes sense of cultures by finding and highlighting similarities. This is naturally attractive, because if we can find common ground, perhaps our cross-cultural difficulties will evaporate. We could create AI that operates on universally agreed-upon principles.
The difficulty with this approach, however, is that identifying universals means looking for higher-order similarities while discarding evidence that cultures are meaningfully different. For instance, psychologists have noted that fairness and justice are valued in every society.
That may be so, but psychologists mean this rather abstractly. Justice does not play out the same way in every culture, and so the universal concept of justice risks becoming vacuous. Americans pride themselves on their pursuit of equality, and may shudder at the concept of a caste system.
Yet other cultures may be quick to note that we deny minors the right to vote, and defer many of their medical and social decisions to a designated guardian. Anthropologists note that to other cultures, the difference between castes may seem as obvious to them as the difference between children and adults does to us.

Justice may well be present in every culture, but the universal concept of justice is so abstract that it is meaningless until particular cultural practices give us a working sense of what justice means, a point at which our elusive similarities slip through humanity’s fingers.

Even within the same country there is vast disagreement about how to implement moral principles. As I have written before, algorithmic fairness is troubling partly because common ideas about “fairness” have been proven mathematically incompatible with one another.
Algorithms that predict recidivism in order to aid in rehabilitative sentencing can either apply equal risk-score cutoffs across races and genders, or they can have equal predictive error rates across races and genders. But when base rates differ between groups, cutoffs and error rates cannot both be made equal simultaneously. Which algorithm is most “just”? Can a universally, ethically acceptable algorithm exist? It seems dubious, at best.
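To make the incompatibility concrete, here is a small, purely illustrative sketch. The group labels, score distributions, and cutoff are all invented for the example; the only structural assumption is that the risk scores are calibrated, meaning a score of 0.6 corresponds to a 60% chance of reoffending in both groups. Because the two groups have different base rates, applying one shared cutoff necessarily yields different error rates.

```python
# Hypothetical, calibrated risk scores for two groups with different base
# rates of reoffending. Within each group, people assigned score s reoffend
# at rate s, so the scores are equally accurate ("calibrated") for both.
groups = {
    "A": {0.2: 40, 0.6: 60},  # 100 people, base rate 0.44
    "B": {0.2: 70, 0.6: 30},  # 100 people, base rate 0.32
}
CUTOFF = 0.5  # the same "high risk" cutoff applied to both groups


def error_rates(dist, cutoff):
    """Return (false-positive rate, false-negative rate) for one group."""
    # Non-reoffenders flagged high-risk, out of all non-reoffenders.
    fp = sum(n * (1 - s) for s, n in dist.items() if s >= cutoff)
    negatives = sum(n * (1 - s) for s, n in dist.items())
    # Reoffenders flagged low-risk, out of all reoffenders.
    fn = sum(n * s for s, n in dist.items() if s < cutoff)
    positives = sum(n * s for s, n in dist.items())
    return fp / negatives, fn / positives


for name, dist in groups.items():
    fpr, fnr = error_rates(dist, CUTOFF)
    print(f"Group {name}: FPR={fpr:.2f}, FNR={fnr:.2f}")
# Same cutoff, same calibration, yet group A's false-positive rate (~0.43)
# is more than double group B's (~0.18).
```

Equalizing the error rates instead would require choosing a different cutoff for each group, which breaks the equal-cutoff criterion. That trade-off, not any programming mistake, is the impossibility the fairness literature describes.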

Evolutionism

Recognizing that universals can rarely be meaningfully implemented, some might be tempted to take an evolutionary stance toward cultural ethics. That is, some might be tempted to say that one culture’s values are more “correct” or morally “right” than others, that the favored morals emerged because that society or culture is more advanced in some way.
Certainly, many Silicon Valley companies, whether wittingly or not, take this approach. Hate speech on the basis of sexual orientation, skin color, gender, or religion is identified by these companies as wrong—while the definitions of hate speech and the groups protected from hate speech privilege progressive American values.

Sharia law, which condemns homosexuality and promotes violence against homosexual and non-binary individuals, clearly operates out of an alternative ethical framework from the liberal individualism expressed in the US.

Twitter, Facebook, Google and the like suspend accounts and flag content which promotes these ideologies. They are taking a moral stance, one which suggests that these tech companies believe their moral ideology is superior.

As a citizen of the US, I strongly disapprove of Sharia law—but how do I defend that my position is the right one?

Likewise, China’s social credit system is heavily criticized in the United States for the presumption that the government’s values are superior in some way. Although we rarely blink at the removal of hate speech online, many of us in the US think that China’s sort of algorithmic moral babysitting is the stuff of dystopian movies.
It evokes memories of atrocities that occurred under authoritarian regimes like Stalin’s or Hitler’s. I don’t need to spend much time explaining why the evolutionary approach to programming ethics into AI is as appealing as it is problematic: we think some morals are better than others, and they just happen to be our morals.

Relativism

The third main way we tend to make sense of other cultures is to think of them relativistically. That is, to presume that no one culture is better than another—they are simply different but equal.
This kind of approach is attractive to postmodernist mindsets fond of phrases like “my truth” and “your truth,” as though what is right and good for one person does not hold universally. Again, this approach is both useful and problematic.
It is admirable to recognize the dignity of each culture, and attractive to respect the rich ethical and moral histories of others. It also rescues us from the moral colonialism that is the natural extension of evolutionary approaches to morality.

That being said, this position means we also cannot take a hard moral stance—we cannot claim that one culture’s morality is better than another.

In terms of our AI, this position is likely to allow cultural norms to dictate which ethics are implemented in a given circumstance. One driverless car dilemma study highlights these differences.
In Eastern countries, let the self-driving car kill the young as readily as the old, and kill the unlawful while sparing the lawful. In Southern countries, let it kill the obese and spare the fit, kill the homeless and spare the wealthy. In Western countries, let it stay its course, killing whoever is in its path and sparing whoever is not, unless different numbers of lives are at stake, in which case kill the few to spare the many.
But there is something nauseating about the thought of calling all these ethical algorithms different but equal. Something about their moral implications draws us to claim that we think some of these moral values are better or worse than others.
And here we find ourselves at the beginning, hoping to grasp onto some sort of a universal value set. And indeed, sometimes we can find a fruitful point of agreement.

Most people value human lives over the lives of dogs and cats. Most people value more lives over fewer lives. And yet, these points of agreement seem to be a drop of water in the ethical ocean AI is facing.

We know that the vast majority of algorithms are not universally morally palatable. Additionally, these decisions are happening behind the protection of tech monopolies’ NDAs. We often don’t know the ethical fabric of AI unless someone happens to audit a specific algorithm, an employee whistle-blows, or someone brings a lawsuit for discriminatory treatment. Who’s watching the watchmen?
I’m building out the infrastructure of the dWeb with ERA. If you enjoyed this article, I’d love to connect with you on Twitter!
