AI (Artificial Intelligence) governance is about evaluating and monitoring algorithms for effectiveness, risk, bias and ROI (Return On Investment). But there is a problem: this part of the AI process often does not get enough attention.
“AI projects are rarely coordinated across a company and data science teams are often isolated from application development,” said Mike Beckley, CTO of Appian. “And now regulators are starting to ask questions businesses don’t know how to answer.”
Keep in mind that AI introduces unique problems. Training data is often flawed, riddled with errors, duplicates and even bias. Then there is the issue of model drift. This is when the AI degrades over time because the algorithms and data no longer adequately reflect changes in the real world.
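To make model drift concrete: one common heuristic is to compare the distribution of a production feature against its training-time baseline, for example with the population stability index (PSI). The sketch below is illustrative, not a prescribed method; the function name, seed, and the ~0.2 alert threshold are assumptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a production feature distribution ('actual') against its
    training baseline ('expected'). PSI above ~0.2 is a common rule of
    thumb that the data has drifted."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty buckets at a tiny probability to avoid log(0).
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # distribution seen at training time
shifted = rng.normal(0.5, 1.0, 5000)   # production data whose mean has moved

print(population_stability_index(baseline, baseline[:2500]))  # low: no drift
print(population_stability_index(baseline, shifted))          # high: drift
```

A monitoring job that runs a check like this on a schedule turns "the model degrades over time" from a surprise into an alert.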
The result is that a company may make bad decisions or miss revenue opportunities. Even worse, there is the potential for the AI to be unfair or discriminatory.
OK then, what about software tools to help with these problems? Can AI governance be automated? Well, this area of technology is still in its nascent stages.
This means that AI governance requires a hands-on approach. “It’s about managing processes and people to get the best results,” said Kenn So, a venture capitalist at Shasta Ventures.
So what are some best practices to consider? What can be done to put together a good framework for AI governance? Interestingly enough, if you already have a data policy in place, then you have a head start.
“The relationship between data and AI is so close,” said Wilson Pang, CTO of Appen. “What data do you have? Where is it coming from? How is the data being altered? By whom?”
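Pang’s questions map naturally onto a data lineage log: every change to a dataset is recorded with who made it and where the data came from. A minimal sketch, assuming nothing beyond the standard library; the dataset, actor, and source names are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageEvent:
    dataset: str  # what data do you have?
    action: str   # how is it being altered? e.g. "ingested", "deduplicated"
    actor: str    # by whom? a person or a service
    source: str   # where is it coming from?
    at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

log: list[LineageEvent] = []
log.append(LineageEvent("support_tickets_v2", "ingested",
                        "etl-service", "zendesk-export"))
log.append(LineageEvent("support_tickets_v2", "deduplicated",
                        "jane.doe", "internal"))

# "How was the data altered, and by whom?" becomes a simple query.
alterations = [(e.action, e.actor) for e in log
               if e.dataset == "support_tickets_v2"]
print(alterations)
```

If a data policy already mandates records like these, extending them to cover training sets is a short step, which is why an existing data policy is a head start on AI governance.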
But of course, there are other things to think about. First of all, it’s important to note that data scientists have different approaches and skill sets than application developers. This can easily lead to a breakdown in communications. In other words, there need to be clear-cut requirements and principles.
Next, you should put together an AI governance plan. “You need this before you send a machine learning algorithm into the wild, whether it be software for image analysis, a recommendation engine, or a voice-enabled commerce bot,” said Rachel Roumeliotis, Vice President of Content Strategy for O’Reilly. “This is important not just for extra-regulated industries like finance and banking and healthcare, but just makes sense if you are making decisions based on an algorithm’s output that will affect your company and clients. A plan should be straightforward and actionable, including concerns of all stakeholders. That plan needs to be owned by a person and team and reviewed periodically. AI/Data engineers and operations engineers will need to refer to this plan for all projects.”
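One way to make “owned by a person and team and reviewed periodically” concrete is to treat the plan itself as a record with an owner and a review cadence that can be checked automatically. A minimal sketch; the project, owner, and interval values are hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class GovernancePlan:
    project: str
    owner: str                 # the individual accountable for the plan
    team: str
    stakeholders: list[str]    # "including concerns of all stakeholders"
    review_interval_days: int  # "reviewed periodically"
    last_reviewed: date

    def review_overdue(self, today: date) -> bool:
        """True if the plan has gone longer than its interval without review."""
        elapsed = today - self.last_reviewed
        return elapsed > timedelta(days=self.review_interval_days)

plan = GovernancePlan(
    project="recommendation-engine",
    owner="a.sharma",
    team="ml-platform",
    stakeholders=["legal", "product", "data-science"],
    review_interval_days=90,
    last_reviewed=date(2021, 6, 1),
)
print(plan.review_overdue(date(2021, 10, 1)))  # more than 90 days have passed
```

A nightly job that flags overdue plans gives the owner a nudge without adding process overhead to every project.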
But an AI governance plan does not necessarily need to be comprehensive. “Adding AI governance to the process does add a layer of complexity that is not always needed,” said Matthew Emerick, a senior consultant at Accenture. “If AI is not a prominent part of a business, then a simple set of guidelines can allow the development team to stay agile while using best practices. On the other hand, if the business is planning on using AI in multiple areas, then an AI Center of Excellence might be recommended to centralize all AI efforts for consistency and completeness.”
Perhaps the most complex part of the process is setting the objectives of the plan. This can be challenging because concepts like ethics, fairness, explainability and transparency are amorphous. What’s more, each industry has its own nuances; there is no one-size-fits-all when it comes to AI governance. Because of this, enough time should be spent putting together the plan, and there should be inclusion across the organization.
“In general, there is agreement that transparency is the key item in AI governance—knowing how and why an AI made a decision can help humans reason about whether or not that decision should have been made,” said Erick Galinkin, Principal Artificial Intelligence Researcher at Rapid7. “Additionally, fairness and ethics are high on many lists—ensuring that the societal biases or unseen prejudices in data are not reproduced or accelerated by integration with artificial intelligence. Accountability—that is, having an individual who is responsible for the decisions made by the algorithm—is a principle championed by organizations like Rapid7, Microsoft, the Partnership on AI, and the Montreal AI Ethics Institute, but does not have the same purchase with all businesses and governments who leverage AI.”
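For some model families, “knowing how and why an AI made a decision” has an exact answer. In a linear model, each feature’s contribution to a score is simply its weight times its value. The weights and feature names below are invented purely to illustrate the idea; real systems typically need approximate explanation techniques for more complex models.

```python
# Illustrative credit-scoring weights for a linear model (invented values).
weights = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.3}
bias = -0.2

def explain(applicant: dict) -> dict:
    """Return the score plus each feature's exact contribution to it."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = bias + sum(contributions.values())
    return {"score": round(score, 3), "contributions": contributions}

result = explain({"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5})
print(result)  # debt_ratio has the largest-magnitude contribution here
```

An explanation like this lets a human check whether a decision rested on a legitimate factor or on a proxy for something it shouldn’t have, which is exactly the reasoning Galinkin describes.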
Regardless, the key is not to wait. For the most part, AI governance needs to be well thought out before an AI project is undertaken.
Note: On October 14th, I will be a speaker for a presentation about AI governance for the Train AI Summit.