Trust Artificial Intelligence? Still A Work In Progress, Survey Shows

Our dependency on AI-based outputs seems to grow every day, both from a business as well as personal perspective. But are we willing to fully trust this output? Are we sure the data fed into these systems is accurate? Are the decision models and algorithms kept up to date? Is it free of bias? Are humans kept in the loop?

The answers to these questions are all still up in the air, according to a recent survey of 7,502 businesses around the world, commissioned by IBM in partnership with Morning Consult.

AI usage just keeps growing. Right now, 35% of companies use AI in their business — up from 31% a year ago. An additional 42% are exploring AI. There are benefits — such as cost savings and efficiencies (54%), improvements in IT or network performance (53%), and better experiences for customers (48%).

Trust is a priority, but many organizations haven’t taken enough steps to ensure AI is trustworthy, the survey also shows. Eighty-five percent of respondents agree that consumers are more likely to choose a company that’s transparent about how its AI models are built, managed and used. In addition, 84% say that “being able to explain how their AI arrives at different decisions is important to their business.”

Maintaining brand integrity and customer trust is the most important reason to pursue AI trust, cited by 56% of managers. Another 50% say meeting external regulatory and compliance obligations is key, and 48% cite the ability to govern data and AI across the entire lifecycle. Another 48% seek the ability to monitor data and AI across the lifecycle.

A majority of respondents say they lag in many efforts to ensure trust — from finding the right skills to proactively avoiding bias. Most organizations haven’t taken key steps to ensure their AI is trustworthy and responsible, such as reducing bias (74%), tracking performance variations and model drift (68%), and making sure they can explain AI-powered decisions (61%).

“A significant challenge is that the field of applied AI ethics is still relatively new, and most companies cite a lack of skills and training,” the survey’s authors state. Leading challenges to assuring greater trust in AI include the following:

  • Lack of skills and training to develop and manage trustworthy AI (63%)
  • AI governance/management tools that don’t work across all environments (60%)
  • Lack of an AI strategy (59%)
  • AI outcomes that aren’t explainable (57%)
  • Lack of company guidelines for developing trustworthy, ethical AI (57%)
  • AI vendors who don’t include explainability features (57%)
  • Lack of regulatory guidance from governments or industry (56%)
  • Building models on data that has inherent bias (56%)

The good news, the survey’s authors state, is that the more likely a company is to have deployed AI, the more likely it is to value trustworthiness. IT professionals at businesses currently deploying AI are 17% more likely to report that their business values AI explainability than those at businesses that are simply exploring AI.

The most activity associated with AI trust focuses on safeguarding data privacy, the survey also shows.
