Regulating Artificial Intelligence: Why We Need Expert Input To Limit Risks

When science fiction writer Isaac Asimov introduced the Three Laws of Robotics to the world in 1942, practical robotic applications such as industrial pneumatic arms, all-transistor calculators and even the term “artificial intelligence” itself were all still a decade or two in the future.

Asimov’s laws boil down to three simple maxims: protect humans; obey humans; and, if it doesn’t violate rule one or two, protect itself. Simple and sensible enough, yet the limits and internal tensions of these basic laws have inspired writers to dream up a wide range of science fiction dystopias, from 2001: A Space Odyssey to Blade Runner to The Terminator. And let’s not forget to add Asimov’s own collection of stories, I, Robot, which features the Three Laws, to the list.

For business leaders, ushering in an AI-driven global calamity isn’t a top-of-mind concern, but even avoiding smaller risks can be a major challenge. Businesses must figure out how to deploy AI in a way that does not harm consumers, violate their privacy or otherwise run afoul of the law. Failure to do so could trigger massive lawsuits.

Facial Recognition: An AI Success Story Or A Cautionary Tale?

Consider an AI application that is beginning to touch all our lives: facial recognition. Consumers can use it to sort digital photos and open the lock screens on their mobile phones. Law enforcement has adopted it for everything from enforcing no-fly lists to bolstering security at the Super Bowl.

Most of us trust AI to unlock our phones (a simple, low-stakes task), but if AI violates consumer privacy, expect lawsuits to follow. For instance, Facebook was recently sued over how it identifies people in photographs uploaded to the site.

The class-action lawsuit alleged that Facebook’s Tag Suggestions tool, which scans photos and offers suggestions about each person’s identity, collected and stored biometric data without user consent, violating the Illinois Biometric Information Privacy Act. Facebook recently agreed to pay $550 million to settle the suit.

With consumer privacy laws on the rise globally (e.g., the GDPR in the EU) and within the U.S. on the state level (e.g., California’s CCPA), lawsuits such as the Facebook suit should be regarded as canaries in the coal mine. Expect more to come if AI doesn’t evolve in ways that protect the public interest.

The Majority Of Americans Believe AI Should Be ‘Carefully Managed’

Luckily for the politicians who will be responsible for crafting new AI regulations, both large corporations and the public at large believe AI regulation is past due. Corporate leaders at Google, Tesla and Microsoft, to name only a few, are speaking out about the need to regulate AI.

Public opinion aligns with these business leaders. A recent survey by the Center for the Governance of AI found that the vast majority of Americans (82%) believe that AI is a technology that “should be carefully managed.” Even the Catholic Church has chimed in on the subject, arguing that governments and businesses should create ethical standards around AI that “protects people.” IBM and Microsoft both signed on in support of the church’s proposal.

However, consumer attitudes about AI are less uniform when you dig into the data. Pew Research found that while the general public supports law enforcement’s use of facial recognition, support drops among minority groups.

Moreover, while 59% of those polled believe it’s appropriate for law enforcement to use facial recognition, only 30% believe it’s acceptable for companies to use facial recognition to track employee attendance, and only 15% believe that advertisers should be able to deploy the technology to track how consumers respond to advertisements.

With so much confusion around AI’s legitimate usage, businesses planning to deploy it would be wise to heed the warnings of experts. Fortunately, international cooperation on the issue is already starting to pull experts together to tackle the problem.

France, Canada and the Organization for Economic Co-operation and Development (OECD) have formed a Global Partnership on AI (GPAI) to collaborate on ways to manage AI’s impacts on society.

The U.S. is the only G7 nation that has not signed on to the GPAI (along the lines of what I published in my previous column), with the current administration preferring to let AI developers regulate themselves.

Experts And AI Must Work Together To Mitigate Risks

The U.S. and other governments should join the GPAI and similar partnerships to start collaborating on AI frameworks that will proactively meet future challenges created by this powerful new technology.

The most promising AI framework to date is the expert-in-the-loop (EITL) concept, an approach to AI and machine learning that places subject matter experts at key supervisory points within the AI decision-making workflow.

AI is trusted to handle those chores that are difficult for humans to accomplish, such as processing vast amounts of information or examining very large datasets. Often, the AI algorithm can also manage the next level of analysis, seeking out patterns, cross-referencing information against existing databases and even calculating risks based on sophisticated statistical models.

Then, in the EITL model, the insights that AI tools generate are handed off to experts, who verify their accuracy and perform higher-level analysis that AI can’t, and probably shouldn’t, conduct, such as having the final say on verifying the identities of those flagged by the AI as being on no-fly lists.
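The division of labor above can be sketched in code. The following Python sketch is purely illustrative (all class and field names are hypothetical, not drawn from any real system): the model scores each flagged candidate, and anything at or above a confidence threshold is routed to a human expert queue for a final decision rather than acted on automatically.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    """A match flagged by the AI, e.g. a possible no-fly-list hit."""
    subject_id: str
    score: float  # model confidence between 0.0 and 1.0

@dataclass
class EITLPipeline:
    """Expert-in-the-loop triage: auto-clear low-confidence flags,
    route everything else to a human expert queue."""
    review_threshold: float = 0.5
    expert_queue: list = field(default_factory=list)
    cleared: list = field(default_factory=list)

    def triage(self, candidate: Candidate) -> str:
        if candidate.score >= self.review_threshold:
            # Even a high-confidence flag requires a human decision:
            # the expert, not the model, has the final say.
            self.expert_queue.append(candidate)
            return "expert_review"
        self.cleared.append(candidate)
        return "auto_cleared"

pipeline = EITLPipeline(review_threshold=0.5)
print(pipeline.triage(Candidate("traveler-001", 0.92)))  # expert_review
print(pipeline.triage(Candidate("traveler-002", 0.12)))  # auto_cleared
```

The key design choice is that the model never takes a consequential action on its own; it only sorts work, leaving the high-stakes judgment to the expert.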

A major benefit of EITL is that collaboration limits errors, mitigates risks and provides greater transparency into AI-based judgments and decisions. The chances of both the AI algorithm and the individual experts making the same mistake on the same decision are significantly lower than if each operated alone. EITL reduces dangerous situations and provides more oversight of AI.

In the absence of expert-driven checks and balances, organizations will be left with no way to verify or influence decisions made by AI systems. Tech leaders agree that we need sensible, flexible, field-tested templates and laws to guide AI development and deployment. We’d be wise to start listening to them.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
