The future of AI regulation

In recent years, artificial intelligence (AI) has drawn more and more public attention.

According to the special report on technology within Edelman’s annual Trust Barometer, people around the world are concerned that AI and robots could replace human workers. As trust in the media, online social platforms and search engines has declined, fewer people are willing to share their personal data.

Some say the chasm between trust and technology has formed for good reason: For most of AI’s existence, there has been little regulation around it, and the rules that do exist can seem loose and opaque given just how world-changing the technology could be. Without guardrails to help us understand AI’s role in our personal lives, it’s easy to jump to conclusions.

Governments and organizations around the globe are responding by developing AI regulations to establish guidelines in their jurisdictions around technology — and most importantly, around the elements of inclusivity and fairness, data privacy and transparency. In fact, Deloitte predicts 2022 will bring a host of new regulatory action to the AI industry.

This movement has already started to take shape. Led by the General Data Protection Regulation (GDPR) in Europe, which took effect in May 2018, the California Consumer Privacy Act (CCPA) in the U.S., the Act on the Protection of Personal Information (APPI) in Japan and Brazil’s General Data Protection Law (LGPD), there are now data privacy laws in nearly 130 countries. There are far fewer AI-centered laws, though the EU is currently working on proposed regulations that bring a “balanced approach” to the perils and possibilities of AI.

Arguably, though, a patchwork of different laws throughout the world isn’t enough, particularly when we consider the international scope of tech and AI in business. As Paul Hecht, senior product marketing manager of AI Data Solutions at TELUS International, notes, businesses have a significant responsibility in answering for the impact of AI, particularly when it comes to the existence and perception of fairness. “I think there is some justifiable apprehension,” says Hecht.

“At the extreme end of the spectrum, there’s the ‘robots are going to take over the world’ viewpoint, which is inaccurate — but, there need to be checkpoints,” Hecht continues. “Because AI and robots can only perform as well as the data that’s feeding the machine learning models, universal guidelines will lead to increased public confidence that the uses of AI are for the greater good.”

The genesis of artificial intelligence law

In Forrester’s predictions for AI in 2022, the research firm anticipates that one in five companies will dig in deeper and faster on AI, putting it at the core of their systems.

Historically, the law has been no match for the pace at which AI is created and adopted. Industry has traditionally been the leader here: tech start-ups introduce new concepts or paradigms, and the government follows.

While digital privacy laws have been around for a while, the regulation of artificial intelligence is much newer. By and large, the typical approach to date has been one of self-regulation; essentially, companies and other organizations have been free to do what they please regarding AI, so long as it doesn’t contravene existing criminal or civil laws.

Some regulatory bodies, such as the Federal Trade Commission (FTC) in the U.S., have continued to facilitate this approach — with a caveat. In April of 2021, the FTC published a blog post summarizing its compliance expectations regarding truth, fairness and equity in AI, and it came with a warning: “If you don’t hold yourself accountable, the FTC may do it for you.”

In the European Union’s case, says Hecht, the commission evaluating AI legislation may choose to emulate GDPR’s method of scaling risk. “The framework of the risk guidelines — from no risk to unacceptable risk — is something that people can understand,” he says, adding that it helps both companies and citizens understand their exposure to risk on a spectrum. For instance, such a framework could deem something like unauthorized facial scanning an unacceptable risk to society.
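To make the idea of a risk spectrum concrete, here is a minimal Python sketch of a tiered classification. The tier names loosely echo the EU’s proposed framework, but the example use cases and their assignments are illustrative assumptions, not the text of any regulation.

    from enum import Enum

    class RiskTier(Enum):
        """Risk tiers loosely modeled on the EU's proposed AI framework."""
        UNACCEPTABLE = "unacceptable"  # banned outright, e.g. social scoring
        HIGH = "high"                  # permitted, but with strict obligations
        LIMITED = "limited"            # transparency duties only
        MINIMAL = "minimal"            # largely unregulated

    # Illustrative mapping from use case to tier -- these assignments are
    # assumptions for the sake of example, not any regulation's actual text.
    USE_CASE_TIERS = {
        "unauthorized facial scanning": RiskTier.UNACCEPTABLE,
        "credit scoring": RiskTier.HIGH,
        "customer service chatbot": RiskTier.LIMITED,
        "spam filtering": RiskTier.MINIMAL,
    }

    def classify(use_case: str) -> RiskTier:
        """Default unknown use cases to HIGH so they get human review."""
        return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

    for case in USE_CASE_TIERS:
        print(f"{case}: {classify(case).value} risk")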

Reducing bias in AI requires intention

A major reason governments are seeking to regulate AI is its enormous potential for bias. As we know, AI models are only as diverse as the human sources of their data.

If AI’s human counterparts are too homogenous as a group, organizations run a real and potentially devastating risk of introducing and reinforcing racial, cultural, socioeconomic and gender biases into AI models and algorithms. Imagine an alternative lending start-up whose algorithm routinely denies mortgage loans to people of color, or a healthcare company whose symptom checker doesn’t account for diseases that disproportionately affect women.
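How would an organization even notice that its data is too homogenous? One rough starting point is a simple representation audit of the training set. In the Python sketch below, the records and the "gender" attribute are hypothetical stand-ins for whatever demographic dimensions a real dataset would carry.

    from collections import Counter

    def representation_report(records, attribute):
        """Share of training records per group for one demographic attribute."""
        counts = Counter(r[attribute] for r in records if attribute in r)
        total = sum(counts.values())
        return {group: n / total for group, n in counts.items()}

    # Toy, hypothetical dataset: a skew like this is how bias creeps in.
    training_records = [
        {"gender": "male"}, {"gender": "male"}, {"gender": "male"},
        {"gender": "male"}, {"gender": "female"},
    ]
    print(representation_report(training_records, "gender"))
    # {'male': 0.8, 'female': 0.2} -- a skew the trained model will inherit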

Hecht says sourcing data and AI training staff such as data annotators and gatherers from as far afield as possible is critical to rooting out bias. “For example, TELUS International works with our partners from a solutions standpoint, to ensure that we gather data and source our AI Community members from as diverse a footprint as possible,” says Hecht. It isn’t always easy to find people who, for instance, speak a rare dialect from a remote corner of the world — but it is worth it.

“All the training data that we deliver has human quality standard checkpoints built in,” he notes. Consistency is achieved by continuously examining the data and running training programs to ensure as many gaps as possible are filled.
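One simple form such a human quality checkpoint could take is an inter-annotator agreement check, where items that human labelers disagree on are flagged for re-review. The Python sketch below is a generic illustration of that idea with made-up data, not a description of TELUS International’s actual pipeline.

    from collections import Counter

    def flag_low_agreement(labels_per_item, threshold=0.8):
        """Flag items whose annotator agreement falls below the threshold."""
        flagged = []
        for item_id, labels in labels_per_item.items():
            # Agreement = share of annotators backing the majority label.
            majority_count = Counter(labels).most_common(1)[0][1]
            agreement = majority_count / len(labels)
            if agreement < threshold:
                flagged.append((item_id, agreement))
        return flagged

    # Hypothetical labels from three annotators per image.
    annotations = {
        "img_001": ["cat", "cat", "cat"],  # full agreement, passes
        "img_002": ["cat", "dog", "cat"],  # 2/3 agreement, flagged
    }
    print(flag_low_agreement(annotations))  # [('img_002', 0.666...)]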

This is a movement far beyond tokenism. Intentional inclusion is essential to reducing bias in AI and ensuring that algorithms do not perpetuate the same kinds of discrimination that have marked our world to date.

Businesses are active partners in AI rule-making

Across different parts of the world, AI legislation is coming. As we’ve seen with past tech-facing regulations, governments don’t always get it 100% right the first time, which means two things. One, like tech itself, AI legislation will have to be iterative to remain relevant and enforceable. Two, governments will need to partner with corporations and researchers (and, ideally, each other) to understand the true scope of regulation required.

To create a successful ecosystem, companies have a duty to be forward-thinking and proactive when it comes to protecting individuals’ privacy. For instance, TELUS International’s Ground Truth Studio, a proprietary AI training data platform, uses an image anonymizer that blurs out license plates and faces in photos.
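To give a sense of how automated blurring can work, here is a minimal Python sketch using OpenCV’s bundled frontal-face detector. It is a generic illustration only, not Ground Truth Studio’s implementation; a production anonymizer would also need a license-plate detector and far more robust models. The file names are hypothetical.

    import cv2  # pip install opencv-python

    def anonymize_faces(input_path, output_path):
        """Blur every detected face in an image; return how many were blurred."""
        image = cv2.imread(input_path)
        if image is None:
            raise FileNotFoundError(input_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        )
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            # Replace each detected face region with a heavy Gaussian blur.
            image[y:y + h, x:x + w] = cv2.GaussianBlur(
                image[y:y + h, x:x + w], (51, 51), 0
            )
        cv2.imwrite(output_path, image)
        return len(faces)

    print(anonymize_faces("street_photo.jpg", "street_blurred.jpg"))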

Investing in reducing bias is also a critical area for corporate social responsibility. Right now, organizations have the opportunity to pioneer new ways of fostering inclusivity, fairness and truth in AI and throughout our technology systems. These societal values are possible to achieve through artificial intelligence; it just takes intention, creativity and, more than anything, the human element.

Restoring a sense of balance to technology may be just what we need to restore citizens’ trust in it, too.


Original post: https://www.telusinternational.com/articles/the-future-of-ai-regulation
