When science fiction writer Isaac Asimov introduced the Three Laws of Robotics to the world in 1942, industrial pneumatic arms, all-transistor calculators and even the term “artificial intelligence” itself were all still a decade or two in the future.
Asimov’s laws boil down to three simple maxims: protect humans; obey humans; if it doesn’t violate rule one or two, protect itself. Seems simple and sensible enough, yet the limits and internal tensions of these basic laws have inspired writers to dream up a wide range of science fiction dystopias, from 2001: A Space Odyssey to Blade Runner to The Terminator. And let’s not forget to add Asimov’s own collection of stories, I, Robot, which features the Three Laws, to the list.
For business leaders, ushering in an AI-driven global calamity isn’t a top-of-mind concern, but even avoiding smaller risks can be a major challenge. Businesses must figure out how to deploy AI in a way that does not harm consumers, violate their privacy or otherwise run afoul of the law. If they can’t, they risk triggering massive lawsuits.
Facial Recognition: An AI Success Story Or A Cautionary Tale?
Consider an AI application that is beginning to touch all our lives: facial recognition. Consumers can use it to sort digital photos and open the lock screens on their mobile phones. Law enforcement has adopted it for everything from enforcing no-fly lists to bolstering security at the Super Bowl.
Most of us trust AI to unlock our phones (a simple, low-stakes task), but if AI violates consumer privacy, expect lawsuits to follow. For instance, Facebook was recently sued over how it identifies people in photographs uploaded to the site.
The class-action lawsuit alleged that Facebook’s Tag Suggestions tool, which scans photos and offers suggestions about each person’s identity, collected and stored biometric data without user consent, violating the Illinois Biometric Information Privacy Act. Facebook recently agreed to pay $550 million to settle the suit.
With consumer privacy laws on the rise globally (e.g., the GDPR in the EU) and within the U.S. on the state level (e.g., California’s CCPA), lawsuits such as the Facebook suit should be regarded as canaries in the coal mine. Expect more to come if AI doesn’t evolve in ways that protect the public interest.
The Majority Of Americans Believe AI Should Be ‘Carefully Managed’
Luckily for the politicians who will be responsible for crafting new AI regulations, both large corporations and the public at large believe AI regulation is overdue. Corporate leaders at Google, Tesla and Microsoft, to name only a few, are speaking out about the need to regulate AI.
Public opinion aligns with these business leaders. A recent survey by the Center for the Governance of AI found that the vast majority of Americans (82%) believe that AI is a technology that “should be carefully managed.” Even the Catholic Church has chimed in on the subject, arguing that governments and businesses should create ethical standards around AI that “protects people.” IBM and Microsoft both signed on in support of the church’s proposal.
However, consumer attitudes about AI are less uniform when you dig into the data. Pew Research found that while the general public supports law enforcement’s use of facial recognition, that support drops among minority groups.
Moreover, while 59% of those polled believe it’s appropriate for law enforcement to use facial recognition, only 30% believe it’s acceptable for companies to use facial recognition to track employee attendance, and only 15% believe that advertisers should be able to deploy the technology to track how consumers respond to advertisements.
With so much confusion around AI’s legitimate usage, businesses planning to deploy it would be wise to heed the warnings of experts. Fortunately, international cooperation on the issue is already starting to pull experts together to tackle the problem.
France, Canada and the Organization for Economic Co-operation and Development (OECD) have formed a Global Partnership on AI (GPAI) to collaborate on ways to manage AI’s impacts on society.
The U.S. is the only G7 nation that has not signed on to the GPAI (as I noted in my previous column), with the current administration preferring to let AI developers regulate themselves.
Experts And AI Must Work Together To Mitigate Risks
The U.S. and other governments should join the GPAI and similar partnerships to start collaborating on AI frameworks that will proactively meet future challenges created by this powerful new technology.
The most promising AI framework to date is the expert-in-the-loop (EITL) concept, an approach to AI and machine learning that places subject matter experts at key supervisory points within the AI decision-making workflow.
AI is trusted to handle those chores that are difficult for humans to accomplish, such as processing vast amounts of information or examining very large datasets. Often, the AI algorithm can also manage the next level of analysis, seeking out patterns, cross-referencing information against existing databases and even calculating risks based on sophisticated statistical models.
Then, in the EITL model, the insights that AI tools generate are handed off to experts, who verify their accuracy and conduct the higher-level analysis that AI can’t, and probably shouldn’t, perform, such as having the final say on whether someone the AI flags really belongs on a no-fly list.
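In code, an EITL workflow like the one just described can be sketched as a simple routing policy. This is only an illustrative sketch, not any real screening system’s API: the names (Case, route, REVIEW_THRESHOLD) and the threshold value are all assumptions. The key properties it demonstrates are that the model never acts on a hit by itself, and the expert’s verdict always has the final say.

```python
from dataclasses import dataclass

# Hypothetical expert-in-the-loop (EITL) routing sketch. The model scores
# each case; any match above a review threshold is queued for a human
# expert, whose decision overrides the model. Names and the threshold
# are illustrative assumptions, not a real API.

REVIEW_THRESHOLD = 0.80  # model matches at or above this go to an expert


@dataclass
class Case:
    subject_id: str
    match_score: float  # model's confidence that this is a watchlist match


def route(case: Case) -> str:
    """Return the next step for a case under an EITL policy."""
    if case.match_score >= REVIEW_THRESHOLD:
        # High-confidence hit: never act automatically; an expert verifies.
        return "expert_review"
    # Below threshold: treated as no match, and no action is taken.
    return "cleared"


def expert_decision(case: Case, expert_confirms: bool) -> str:
    """The expert's verdict overrides the model in every case."""
    return "confirmed_match" if expert_confirms else "cleared"
```

In this design the AI only narrows the pool of cases an expert must examine; it never makes the consequential decision on its own, which is the supervisory placement the EITL concept calls for.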
A major benefit of EITL is that collaboration limits errors, mitigates risks and provides greater transparency into AI-based judgments and decisions. The chances of both the AI algorithm and the individual expert making the same mistake on the same decision are significantly lower than if either operated alone. EITL reduces dangerous situations and provides more oversight of AI.
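The intuition behind that claim can be made concrete with a back-of-envelope calculation. The error rates below are invented for illustration, and the independence assumption is a strong one (in practice, model and expert mistakes are often correlated), but it shows why two layered checks beat either check alone:

```python
# Back-of-envelope illustration of why two independent checks reduce error.
# Both error rates are made-up examples; real model and expert errors are
# often correlated, which weakens this bound in practice.

ai_error = 0.05      # assumed chance the model is wrong on a given case
expert_error = 0.05  # assumed chance the expert is wrong on the same case

# If errors were independent, a bad outcome requires BOTH to fail:
joint_error = ai_error * expert_error  # roughly 0.25%, vs. 5% for either alone

print(f"Either alone: {ai_error:.1%}; both failing together: {joint_error:.2%}")
```

Even under less generous (correlated) assumptions, the layered review still catches many of the model’s mistakes, and the expert’s involvement is what supplies the transparency into how each decision was reached.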
In the absence of expert-driven checks and balances, organizations will be left with no way to verify or influence decisions made by AI systems. Tech leaders agree that we need sensible, flexible, field-tested templates and laws to guide AI development and deployment. We’d be wise to start listening to them.