It’s no secret that AI has a trust problem. Part of the problem is right in the name: the word “artificial” has baggage. Just think of artificial colors, dyes, sweeteners, plants, light. Our trust issues with AI go well beyond naming, though. There’s been too much hype about the technological wizardry of AI and how it’s going to change everything for business, and too little attention paid to how people actually make decisions. So much of human trust is based on our direct experience of how other people think and how reliable they are. But we’ve been asking people to take AI tech on faith.
If we want organizations to make decisions based on predictive analytics with huge implications for their business, public safety or someone’s health, AI has to deliver more than accuracy. We need to give people up and down the org chart continual proof that its conclusions are trustworthy. To make business decision-makers feel confident about AI, we have to make these technologies more understandable, provide clear information about how the AI arrives at its decisions and offer user-friendly guidance on what steps to take.
Here are three core elements of trustworthy AI and tips on how to bridge the gap between data science and business users.
1. Ability to show relationships within data with better, AI-generated visuals
Most data problems are complex and involve many dimensions. It’s not unusual for a data problem to have 50 to 100 metrics, but most can be boiled down to the five to 10 variables that really matter for what you’re trying to predict. That target might be, for example, engine failure or other problems that result in unscheduled repairs.
Working with even this many metrics gets complex and time-consuming. If a complex problem boils down to just 10 variables, how would you visualize the relationship between those 10 variables and the target of interest, and the relationships among the variables themselves?
If you want to visualize these variables in pairs using traditional 2D visualizations, you would have to view 45 different plots (there are 45 possible pairs within 10 variables), which makes it hard for the human mind to stitch together a meaningful picture. By the way, if we had to view the relationships between all the pairs within 100 variables using traditional 2D plots, we would have to view 4,950 plots!
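The plot counts above are just the number of ways to choose two variables from n, the binomial coefficient C(n, 2). A quick sketch:

```python
from math import comb

def pairwise_plot_count(n_variables: int) -> int:
    """Number of distinct variable pairs, i.e. how many traditional
    2D scatter plots you'd need to see every pairwise relationship."""
    return comb(n_variables, 2)

print(pairwise_plot_count(10))   # 45 plots for 10 variables
print(pairwise_plot_count(100))  # 4,950 plots for 100 variables
```

The count grows quadratically, which is why viewing pairs one plot at a time stops scaling almost immediately.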
AI can find the key relationships in the data and figure out the best way to represent those relationships visually. In 3D, nuances become clear through size, color, transparency and clusters of communities within your dataset — nuances that would be lost in 2D. You don’t have to leave metrics out because they won’t fit on an X/Y axis or because they make for a cleaner presentation (a surefire way to oversimplify and miss something important or unexpected). Instead, you let the AI find the relationships and display them in a way that helps you understand all your key data, using advanced 3D graphs.
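To illustrate how a single 3D view can carry extra dimensions through marker size and color, here’s a minimal matplotlib sketch. The sensor names and the risk score are made up for illustration, not taken from any particular platform:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 200

# Hypothetical engine-sensor readings: three variables on the axes...
temp, vibration, pressure = rng.normal(size=(3, n))
hours = rng.uniform(100, 5000, n)   # ...a fourth encoded as marker size...
risk = (temp + vibration) / 2       # ...and a fifth encoded as color.

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(temp, vibration, pressure, s=hours / 50, c=risk, alpha=0.6)
ax.set_xlabel("temperature")
ax.set_ylabel("vibration")
ax.set_zlabel("pressure")
fig.savefig("engine_3d.png")
```

Five variables end up in one picture instead of ten separate pairwise plots.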
2. The AI has to explain what it’s doing and why
The built-in AI algorithms have to generate plain-English explanations, or annotations, that walk step by step through what the AI has found in the data. Explainability is part of the trust gap in advanced analytics. Explainable AI means you can clearly describe the AI model being used, the potential biases within it and how it shapes the data behind a recommended course of action. It automatically identifies the features that matter in a decision and says exactly why the algorithm considered them important.
When users have this transparency during data pre-processing, exploration, prediction and prescription, they can fully understand what’s going into AI-based recommendations. They’re told the probability of the desired result and how confident the software is in its prediction. They can see if there’s bad, missing, skewed or outdated data.
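One widely used, model-agnostic way to surface which inputs drove a prediction is permutation importance: shuffle one feature at a time and measure how much the prediction error grows. A minimal NumPy sketch on toy data (the feature names are hypothetical, and a least-squares fit stands in for a real model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dataset: 3 features; only the first two actually drive the target.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Stand-in "model": ordinary least squares.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda data: data @ coef
baseline_error = np.mean((predict(X) - y) ** 2)

# Permutation importance: shuffle one column at a time and record
# how much the mean squared error increases.
importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(np.mean((predict(X_perm) - y) ** 2) - baseline_error)

for name, imp in zip(["temperature", "vibration", "pressure"], importances):
    print(f"{name}: error increase {imp:.3f}")
```

The irrelevant third feature shows near-zero importance, giving users a concrete, checkable reason behind a recommendation rather than a black-box score.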
It’s important for everyone — from subject matter experts to data scientists to the legal department to the C-suite — to be able to understand your AI model, to ensure it’s relevant to the real world and is being applied as intended.
3. Make it easy to make decisions and take action
Back up AI-driven recommendations with clear explanations in plain language. It’s not enough to provide generalized recommendations; managers want specific evidence. Today’s AI solutions can automatically pick the graph that tells the story and provide a short narrative to go with it, in language that doesn’t require a math degree to understand.
AI should also allow end users to easily run different scenarios to help determine the right course of action and what to do next.
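Scenario analysis can be as simple as re-scoring a record with one input changed. Here’s a hedged sketch using a toy logistic risk model — the coefficients and feature names are invented for illustration, not a real trained model:

```python
import math

def failure_risk(temperature: float, vibration: float, hours: float) -> float:
    """Toy logistic risk score -- a stand-in for a trained model."""
    z = 0.04 * temperature + 0.8 * vibration + 0.0005 * hours - 6.0
    return 1 / (1 + math.exp(-z))

baseline = {"temperature": 95.0, "vibration": 2.1, "hours": 3200}
scenario = {**baseline, "vibration": 1.2}   # what if we reduce vibration?

print(f"baseline failure risk: {failure_risk(**baseline):.2f}")
print(f"scenario failure risk: {failure_risk(**scenario):.2f}")
```

Letting a maintenance manager flip one input and immediately see the risk change is what turns a static prediction into a decision tool.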
And an AI platform has to connect seamlessly with other systems and become part of everyday workflows. This is how a modern, interactive AI platform makes data science actionable.
There’s a lot of wow factor with artificial intelligence. Platforms are becoming virtual reality-capable, allowing remote users to stand inside and touch a dataset and collaborate with others in a virtual room. They can handle myriad types of data with no code: numerical, categorical (like gender, state of residence, education level) and unstructured (emails, PowerPoints, survey responses, social media posts, call center transcripts, etc.). The business impact can be huge.
But most AI projects fail. AI doesn’t really take off in organizations until it’s made truly accessible to those beyond the data science department. Research has found that giving people control over algorithms, by letting them slightly modify them, can help create more trust in AI predictions—and make users more likely to use them in the future.
So if you want AI to get traction, give key stakeholders the opportunity to infuse AI insights with their own areas of expertise. Let more people collaborate with other team members on complex data problems, using the tools that make this feasible. By letting everyone see what’s behind the AI at every stage of its life cycle, you’ll have an AI model people can trust.