Let’s talk about ethical issues in machine law. Specifically, I want to discuss the question of Artificial Intelligence and legal personhood. Could we grant an A.I. civil rights and let it represent itself, or be represented by a lawyer, in a court of law? In his thought-provoking article “AI Personhood: Rights and Laws,” Roman Yampolskiy discusses the possibility of legal representation for A.I.s, its role in business automation, and the problems, moral dilemmas, and possible solutions it raises.
Non-human entities such as law firms, corporations, and governments already possess legal personhood. Yampolskiy argues that certain legal rights and privileges could be granted to autonomous systems today through particular loopholes in the legal system, without significantly changing existing laws.
“Professor Shawn Bayern demonstrated that anyone can confer legal personhood on an autonomous computer algorithm merely by putting it in control of a limited liability company (LLC). The algorithm can exercise the rights of the entity, making them effectively rights of the algorithm. The rights of such an algorithmic entity (AE) would include the rights to privacy, to own property, to enter into contracts, to be represented by counsel, to be free from unreasonable search and seizure, to equal protection of the laws, to speak freely, and perhaps even to spend money on political campaigns. Once an algorithm had such rights, Bayern observed, it would also have the power to confer equivalent rights on other algorithms by forming additional entities and putting those algorithms in control of them” (Thompson, S.J. 2020, p. 2).
How would granting A.I. legal personhood impact human safety and dignity? One point worth worrying about is that assigning legal liability to an A.I. does not involve specifying any level of algorithmic complexity: the simplest combination of “if…then” statements could technically take over the decision-making of an entire company. On the other hand, a full-blown autonomous artificial intelligence could prove much more harmful in the long run, since its effects would be much harder to reverse. That is, do we want an efficient super-intelligence to govern our lives (which are becoming more and more reducible to data points), or do we prefer to be monitored by a “stupid” or “bad” algorithm? The answer is not obvious if we think in terms of crises. But more importantly, do we want to live in a world where artificial systems have civil rights while minorities all over the world still struggle for legal recognition? Think about it. Something is terribly wrong with this picture; leaving all else aside, our priorities seem to be off.
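To make the worry concrete, here is a minimal sketch of what such a “simplest combination of if…then statements” might look like. Everything here is hypothetical and illustrative (the function name, thresholds, and decisions are invented); the point is only that nothing in the LLC loophole requires the controlling algorithm to be any more sophisticated than this:

```python
# Hypothetical sketch: a trivial rule-based "algorithm" that could, in
# principle, be named as the decision-maker of an LLC. All names and
# thresholds are illustrative assumptions, not from Bayern or Yampolskiy.

def company_decision(quarterly_profit: float, cash_reserves: float) -> str:
    """Return a business decision from a few hard-coded if...then rules."""
    if quarterly_profit < 0 and cash_reserves < 100_000:
        return "liquidate assets"
    elif quarterly_profit < 0:
        return "cut costs"
    else:
        return "reinvest profits"

# A few lines of branching logic are enough to "control" an entity.
print(company_decision(-5_000, 50_000))
print(company_decision(250_000, 1_000_000))
```

The unsettling part is precisely this asymmetry: the legal machinery that would confer rights on the entity is indifferent to whether its controller is three lines of branching logic or a full-blown autonomous system.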
And let’s face it: what would a predatory, profit-oriented corporate entity want more than to completely automate its processes? A company run by algorithms could eventually win out over its human competitors, leaving millions unemployed and in desolate poverty. Who is to say that legal representation won’t “spill over” into the public sector, especially with the help of deepfakes and increasingly sophisticated imitations of human persons? What would prevent an A.I. from running a political campaign? Who knows what an algorithm might sacrifice for efficiency? Human autonomy? Dignity? Life itself? What if our morality proves too primitive for autonomous systems and they find “better” ways to run the world?
Yampolskiy attempts to offer some solutions to the danger. First, we could pass laws that specifically exclude A.I.s from legal representation. Second, we could impose a legal cap on wealth accumulation, thereby deterring the threat of Algocracy, a concept we discussed in our previous post on John Danaher’s text. However, Yampolskiy is pessimistic about these solutions: there is simply too much corporate and government interest working against them. Nevertheless, the problem is very real:
“Overall, it is important to realize that just like hackers attack computer systems and discover bugs in the code, machines will attack our legal systems and discover bugs in our legal codes and contracts” (Thompson, S.J. 2020).