- Ethical problems such as AI bias and a lack of algorithmic transparency are already affecting society, often negatively, through the technologies we use daily
- Policymakers should explore how to resolve accountability and trust-building issues with AI technology
- There is a need for sound mechanisms that will generate a comprehensive and collectively shared understanding of AI’s nature and development
Looking at how governments worldwide are dealing with tech giants, it becomes clear that the two sides do not necessarily speak the same language. While artificial intelligence (AI) developers have the information and a firm grasp of the technology, the same cannot be said of the regulators who have to police them.
How can one regulate something one does not fully comprehend? It is a pickle, but fortunately a consensus has started to form around the impact that AI will have on humankind and civil society at large. In fact, the public and private sectors are stepping up their calls for accountability and trust-building. The World Economic Forum (WEF) in a recent report acknowledged that “the AI integration within industry and society and its impact on human lives, calls for ethical and legal frameworks that will ensure its effective governance, progressing AI social opportunities and mitigating its risks”.
The WEF also reckons that there is a need for sound mechanisms that will generate a comprehensive and collectively shared understanding of AI’s development and deployment cycle. “Thus, at its core, this governance needs to be designed under continuous dialogue utilizing multi-stakeholder and interdisciplinary methodologies and skills,” the report’s author Adriana Bora stated.
The lack of clarity on AI as a technology
On one hand, only a limited number of policy experts truly understand the full cycle of AI technology, Bora said. On the other hand, she noted that technology providers lack clarity, and at times interest, when it comes to shaping AI policy with integrity and building ethics into their technological designs.
If anything, there is a dire need for “ethics literacy” and a “commitment to multidisciplinary research” on the technology providers’ side. “The process of understanding and acknowledging the social and cultural context in which AI technologies are deployed, sometimes with high stakes for humanity, requires patience and time,” Bora said, adding that with increased investments in AI, technology companies are encouraged to identify the ethical considerations relevant to their products and transparently implement solutions before deploying them.
This could, in theory, spare companies the hasty withdrawals and profuse apologies that follow when AI models behave unethically. Yet the report also noted that policymakers need to step up. “It is only by familiarizing themselves with AI and its potential benefits and risks that policymakers can draft sensible regulation that balances the development of AI within legal and ethical boundaries while leveraging its tremendous potential,” Bora noted. She suggests that knowledge building is critical for developing smarter AI regulation, for enabling policymakers to engage in dialogue with technology companies on an equal footing, and for enabling both sides to collaborate on a framework of ethics and norms within which AI can innovate safely.
How does the information gap impact small businesses?
As a technology, AI bodes well for small businesses looking to implement it as part of their daily operations. Simply put, the more small businesses increase their digital presence, the more pervasively the benefits of AI will be felt across the organization as a whole.
Despite the ethical concerns, AI systems have proven adept at helping small and medium businesses (SMBs) automate internal processes and make them far more efficient, without requiring more manpower. Headcount is a major cost concern for a small business owner, but luckily in this information age there is likely an app or a service that can automate everything from administration to HR functions.
And that’s not all: AI marketing tools use data collection and analysis to make intelligent decisions for lead generation, content creation, and customer personalization. AI-powered chatbots can’t wholly replace live customer agents, but they can field many frequently asked questions and lower the shopping cart abandonment rate on your e-commerce platform. Even time-consuming, monotonous tasks like bookkeeping can today be enhanced with AI, reducing the risk of human error in calculations at the same time.
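To make the chatbot idea concrete, here is a deliberately minimal sketch of the deflect-the-FAQs pattern: match an incoming customer question against a small FAQ list and only escalate to a live agent when nothing matches. The FAQ entries, the word-overlap scoring, and the threshold are all illustrative assumptions; production chatbots use trained language models rather than this toy matching.

```python
# Toy sketch: route common customer questions to canned FAQ answers,
# escalating to a human agent only when no FAQ matches well enough.
# The FAQ content and the 0.3 threshold are made-up examples.

FAQS = {
    "what are your shipping times": "Orders ship within 2-3 business days.",
    "how do i return an item": "Returns are accepted within 30 days of delivery.",
    "do you ship internationally": "Yes, we ship to most countries worldwide.",
}

def answer(question: str, threshold: float = 0.3) -> str:
    """Return the best-matching FAQ answer, or hand off to a live agent."""
    words = set(question.lower().strip("?!. ").split())
    best_score, best_reply = 0.0, ""
    for faq, reply in FAQS.items():
        faq_words = set(faq.split())
        # Jaccard overlap: shared words divided by total distinct words.
        score = len(words & faq_words) / len(words | faq_words)
        if score > best_score:
            best_score, best_reply = score, reply
    if best_score >= threshold:
        return best_reply
    return "Let me connect you with a live agent."
```

The design point is the fallback: the bot handles the repetitive questions and escalates everything else, which is exactly the "can't wholly replace live agents" trade-off described above.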
According to the McKinsey Global Institute, AI has the potential to deliver around $13 trillion in additional global economic activity by 2030. McKinsey says the size of the impact will be determined by micro and macro factors as organizations and countries adopt the technology. How the public and private sectors go about implementing AI will in great part dictate how far it goes, for better or worse.
While the “AI race” is on between the world’s economic powers and many countries have launched their own industrial strategies, international collaboration based on shared principles is needed to provide an enabling environment for human-centered AI that promotes innovation and investment. Still, disruptive technology like AI also creates new societal challenges that need to be addressed, including the impact on labor markets, privacy protection, data security, and new ethical questions posed by human-machine relationships.
Decisive action by policymakers is needed to turn those challenges into opportunities. For example, adapting education systems and curricula will be key both to driving AI adoption and to combating inequality. And high standards of transparency, data protection, and cybersecurity will foster the public trust and confidence in AI technologies that is necessary to realize their potential.
Original post: https://techhq.com/2021/02/regulators-still-dont-get-ai-technology/