Ethical AI, in simple words, is about ensuring your AI models are fair, transparent, and unbiased.
So how does bias get into a model? Let's assume you are building an AI model that provides salary suggestions for new hires, and you include gender as one of the features used to suggest a salary. The model will then learn to differentiate salaries based on gender. In the past, this bias crept in through human judgment and various social and economic factors (https://en.wikipedia.org/wiki/Gender_pay_gap), but encoding it into a new model is a recipe for disaster. The whole idea is to build a model that is unbiased and suggests salaries based on a person's experience and merit.
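To make the salary example concrete, here is a minimal sketch (the records, salaries, and feature names are all hypothetical) showing how a model that leans on gender simply reproduces the historical pay gap, while a model restricted to experience gives the same suggestion regardless of gender:

```python
# Hypothetical historical records: (gender, years_experience, salary).
# The salaries deliberately encode a historical pay gap at equal experience.
records = [
    ("F", 5, 70000), ("M", 5, 80000),
    ("F", 8, 85000), ("M", 8, 95000),
]

def avg(values):
    return sum(values) / len(values)

# Naive "model": suggest the average salary of past hires with the same gender.
# It faithfully learns — and perpetuates — the historical gap.
def suggest_by_gender(gender):
    return avg([s for g, _, s in records if g == gender])

# Merit-based model: suggest the average salary of past hires with the same
# experience, ignoring gender entirely.
def suggest_by_experience(years):
    return avg([s for _, y, s in records if y == years])

print(suggest_by_gender("F"), suggest_by_gender("M"))  # the gap persists
print(suggest_by_experience(5))  # one suggestion, whoever the candidate is
```

The fix is not a clever algorithm; it is leaving the sensitive feature out of the model in the first place.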
Take another example: an application that provides restaurant recommendations and allows the user to book a table. The application is designed to look at the amount spent in previous transactions and the ratings of restaurants (along with other features), and the AI system starts recommending restaurants that are more expensive. Even though there might be good, less costly restaurants in the vicinity, those restaurants may not show up among the top recommendations. And the more the user spends, the more revenue the restaurant application earns. In short, you are steering a class of users towards spending more on high-end restaurants without them knowing about it. Does this count as bias or as a smart revenue-generating scheme?
Ethical AI is a great topic for research and debate, as you will see a lot of development (as well as the usual marketing buzzwords) and governance in this area.
So how do you ensure your model is ethical and validate it?
- Design the model without bias — Ensure you don’t include features that can make your model biased. For instance, don’t include gender when predicting salary packages. Take time to validate the data sources and features used to build the model.
- Explain the model output — Explainability should be a key design principle. When the user receives an output from an AI algorithm, the algorithm should also convey why that output was presented and how relevant it is. This empowers users to understand why particular information is being shown and to turn any preferences associated with the algorithm on or off for future recommendations.
- Validate the model — Validate the model with enough test cases. You will also see a lot of offerings (Ethical AI services) crop up in this area in the future. Again, the key is that such offerings/services need a vertical focus rather than being pure-play horizontal AI services (otherwise they would end up like the chatbot hype – https://navveenbalani.dev/index.php/articles/ai-chatbots-reality-vs-hype/).
- Accountability — Ultimately, humans need to review the output of the AI system and take corrective action for critical tasks. I don’t see machines replacing human judgment for critical tasks in the future. For instance, a cancer treatment option suggested by an AI system needs to be carefully investigated by a doctor, whereas a fashion website recommending the wrong products is not critical and can be corrected later through feedback.
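The validation guideline above can be made concrete with a demographic-parity style check (a minimal sketch, not a full fairness toolkit; the sample predictions, group labels, and tolerance are all assumptions for illustration): compare the model's average output across groups that should be treated alike, and flag the model if the gap is large.

```python
def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest group-mean prediction.

    `predictions` are model outputs (e.g., suggested salaries) and `groups`
    the corresponding sensitive-attribute value for each prediction.
    """
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    means = [sum(v) / len(v) for v in by_group.values()]
    return max(means) - min(means)

# Hypothetical suggested salaries for two otherwise-comparable groups.
preds  = [82000, 80000, 81000, 71000, 70000, 72000]
groups = ["M", "M", "M", "F", "F", "F"]

gap = demographic_parity_gap(preds, groups)
# A large gap between comparable groups flags the model for review
# before it ever reaches production.
print(f"parity gap: {gap:.0f}")
```

A check like this belongs in the model's automated test suite, run on every retraining, so a biased model fails validation instead of quietly shipping.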
Going back to the restaurant application: if we design it with the above guidelines in mind and make the output explainable to the user, we can, at a minimum, offer four levels of recommendations (shown as tiles in the application), each with evidence of why it is being provided:
- Recommending restaurants based on the user’s earlier restaurant spending, ratings, history, and preferences
- Recommending similar restaurants that are highly rated but less costly, based on the user’s ratings, history, and preferences
- Recommending new restaurants based on the user’s history and preferences
- Recommendations generated by the system without applying any user preferences
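The four tiles above could be assembled along these lines (a sketch only: the restaurant data, preference fields, and scoring thresholds are hypothetical), with each tile carrying the evidence for why its recommendations are shown:

```python
# Hypothetical restaurant records: rating (0-5), average cost per meal,
# and whether the user has visited before.
restaurants = [
    {"name": "Bistro A", "rating": 4.8, "cost": 120, "visited": True},
    {"name": "Cafe B",   "rating": 4.6, "cost": 40,  "visited": False},
    {"name": "Diner C",  "rating": 4.2, "cost": 30,  "visited": True},
    {"name": "Grill D",  "rating": 4.7, "cost": 90,  "visited": False},
]

# Illustrative learned preferences for one user.
user = {"avg_spend": 100, "min_rating": 4.5}

def tile(title, items, evidence):
    """One recommendation tile: a title, the picks, and why they are shown."""
    return {"title": title, "items": [r["name"] for r in items], "evidence": evidence}

tiles = [
    tile("Based on your spending and ratings",
         [r for r in restaurants
          if r["cost"] >= user["avg_spend"] * 0.8 and r["rating"] >= user["min_rating"]],
         "matches your usual spend level and rating preference"),
    tile("Highly rated, lower cost",
         [r for r in restaurants
          if r["cost"] < user["avg_spend"] * 0.8 and r["rating"] >= user["min_rating"]],
         "similar quality at a lower price than you usually pay"),
    tile("New to you",
         [r for r in restaurants if not r["visited"]],
         "restaurants you have not visited yet"),
    tile("Popular overall (no personalization)",
         sorted(restaurants, key=lambda r: -r["rating"])[:2],
         "top rated across all users, ignoring your preferences"),
]

for t in tiles:
    print(t["title"], "->", t["items"], "|", t["evidence"])
```

Surfacing the evidence string alongside each tile is what turns an opaque ranking into an explainable one: the user can see why the cheaper options were not in the first tile and still find them one tile down.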
The revised application now provides a range of recommendations with enough evidence to back each one up, and the choice is ultimately left to the user to pick a restaurant and book a table.
The above was a very simple application, but imagine AI deployed across industries and in government agencies. Developing and monitoring AI systems against ethical principles will be extremely critical. Both the creators of a model and the validators (agencies, third-party systems, etc.) will be essential to ensuring AI models are fair, ethical, and unbiased.
As the creators and validators of AI systems, the onus lies on us (humans) to ensure technology is used for good.