The pace of AI and cognitive technology adoption continues unabated worldwide. Enterprise and organizational adoption of AI continues to grow, as evidenced by a recent survey showing growth across each of the seven patterns of AI. However, this growth is straining existing laws and regulations, which struggle to address emerging challenges. As a result, governments around the world are moving quickly to ensure that existing laws, regulations, and legal constructs remain relevant in the face of technological change and can deal with the new challenges posed by AI.
Research firm Cognilytica recently published a report on Worldwide AI Laws and Regulations that explores the latest legal and regulatory actions taken by countries around the world across nine AI-relevant areas. (Disclosure: I’m an analyst with Cognilytica.) Specifically, the report analyzed emerging laws and regulations pertaining to the use of facial recognition and computer vision, the operation and development of autonomous vehicles, issues of AI-relevant data privacy, challenges arising from conversational systems and chatbots, the potential emergence of lethal autonomous weapons systems (LAWS), concerns around AI ethics and bias, aspects of AI-supported decision making, the potential for malicious use of AI, and other regulations and laws pertaining to the use of, creation of, or interaction with AI systems.
The emerging state of AI laws
It may not be surprising that most governments are adopting a “wait and see” approach to laws and regulations on AI. As with any new technological wave, it’s hard to predict just how the technology will be used, or abused. It took many years before laws were put in place to regulate the use of cell phones while driving: lawmakers first needed to see how the technology was being used and the hazards that distracted driving created before they could craft meaningful laws to regulate it. So too with AI. It’s still too early for lawmakers to see just how this technology will impact citizens.
The European Union is the most active in proposing new rules and regulations, with existing or proposed rules in seven of the nine categories where regulation might apply to AI. The United States, on the other hand, maintains a “light” regulatory posture when it comes to laws around AI.
Autonomous vehicles are starting to make their appearance on the roads. As such, governments and legislative bodies are rapidly facing the need to ensure their traffic laws and other vehicle-relevant laws and regulations remain relevant. Since autonomous vehicles operate side by side with humans, problems arising from self-driving cars can have deadly consequences. Findings from the report show that twenty-four countries and regions have put permissive laws in place for autonomous vehicle operation, and eight more are currently in discussions to enable autonomous vehicles to operate. Many European countries, such as Belgium, Estonia, Germany, Finland, and Hungary, have laws in place that allow for the testing of autonomous vehicles on their roads. France has expressed the ambition of taking a major role in the development of autonomous vehicles, with an emphasis on safety. Furthermore, some countries such as the United States have a system where the central or federal government regulates some aspects of vehicles and vehicle operation while state, regional, provincial, or local authorities have the power to regulate other aspects, resulting in a patchwork legal and regulatory environment.
Other aspects of AI getting regulatory attention
The use and discussion of data goes hand in hand with the conversation on AI; after all, data is what fuels AI. Laws that concern data are highly relevant for AI, since those laws can impact the use and growth of AI systems. According to the report, thirty-one countries and regions have prohibitive laws in place that restrict the sharing and exchange of data without prior consent or impose other restrictions. In 2018, the European Union introduced the General Data Protection Regulation (GDPR), which has in turn obligated member states to maintain a fairly prohibitive regulatory approach to data privacy and usage.
Twenty-seven of the countries mentioned in the report are European Union member states and must comply with GDPR. In addition, the United Kingdom, Brazil, and various states in the United States have enacted restrictive data privacy laws, and the United States is considering such rules and regulations at the federal level. In the coming years, we should expect more countries to create laws regulating data, and we’ll have to see how this plays into the use, and possible regulation, of AI.
While the ethical and responsible use of AI continues to be a hot topic, no country or region has advanced specific legislation or regulation regarding the ethical use of AI or issues of bias in the application or development of AI systems. The European Union, United Kingdom, Singapore, Australia, and Germany are all actively considering such regulation and have advanced discussions around the topic, but no country yet has specific laws in place around ethical and responsible AI. Time will tell whether companies will self-monitor or whether governments will step in to regulate more formally. Likewise, there is no legislative or regulatory activity with regard to the intentional malicious use of AI; once the first big incident makes the news, we expect to see more discussion and regulation in this area.
Interestingly, many countries are concerned about the potential use of AI to power autonomous weapons. Thirteen countries have advanced some level of discussion on restricting the use of lethal autonomous weapons systems (LAWS), and Belgium has already passed legislation to prevent the use or development of LAWS.