I have been a data analytics professional for the past twelve years. Throughout my career, I have seen a steady rise in the use of data across industries, be it engineering, education, healthcare or financial services. In 2017 I read the Economist article “The world’s most valuable resource is no longer oil, but data”, an idea first expressed by Clive Humby, the UK mathematician and architect of Tesco’s Clubcard, in 2006. Prominent figures such as Meglena Kuneva, the European Consumer Commissioner, reiterated it in 2009. Everyone was talking about the infinite potential of data and the million ways it could be used. Companies like Google, Amazon, Uber and Facebook were at the forefront, utilising their vast amounts of user data to enhance the customer experience and improve their businesses with artificial intelligence (AI). However, it was during the 2016 US presidential election and the associated Cambridge Analytica scandal that I realised the dark side of data and AI.
I particularly liked an example quoted in “The Ethics of Artificial Intelligence” from the Machine Intelligence Research Institute, where the algorithm behind a bank’s mortgage approval system unintentionally favoured white applicants over black applicants. Even though the algorithm did not use race as a deciding factor, it used the applicants’ geographical information, and the black applicants had mainly been born or previously resided in predominantly poverty-stricken areas. As a regular user of machine learning algorithms, I realised that if the bank had used a complicated neural-network-based model, it might have proved almost impossible to understand how, or even why, the algorithm was judging applicants based on their race. An algorithm based on decision trees or Bayesian networks, on the other hand, is much more transparent to programmer inspection.
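The mechanism behind this kind of proxy discrimination is easy to demonstrate. The following is an illustrative sketch using entirely synthetic data and a hypothetical approval rule (none of it comes from the actual case): a transparent decision rule that never sees the protected attribute can still produce starkly different approval rates between groups, because the postcode feature it does use is correlated with that attribute.

```python
import random

random.seed(0)

# Synthetic applicants: 'group' is the protected attribute the model never
# sees; 'low_income_area' is an area-level proxy correlated with group.
applicants = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # Hypothetical correlation: group B applicants are far more likely
    # to live in low-income postcodes.
    low_income_area = random.random() < (0.7 if group == "B" else 0.2)
    applicants.append({"group": group, "low_income_area": low_income_area})

def approve(applicant):
    """A fully transparent rule: approval depends only on the postcode proxy."""
    return not applicant["low_income_area"]

def approval_rate(group):
    subset = [a for a in applicants if a["group"] == group]
    return sum(approve(a) for a in subset) / len(subset)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
print(f"Approval rate, group A: {rate_a:.2f}")
print(f"Approval rate, group B: {rate_b:.2f}")
# A common disparate-impact check: the ratio of the lower rate to the
# higher rate (values below 0.8 are often treated as a red flag).
print(f"Disparate impact ratio: {rate_b / rate_a:.2f}")
```

Because the rule here is a single readable condition, the source of the disparity is obvious on inspection; the same proxy effect buried inside a deep neural network would be far harder to diagnose, which is exactly the transparency argument above.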
Due to its profound social implications, many organisations and governments are concerned about the ethical implications of AI. The European Commission has formed a High-Level Expert Group on AI, comprising representatives from academia, civil society and industry, as well as a European AI Alliance, a forum engaged in a broad and open discussion on all aspects of AI development and its impacts. Furthermore, many organisations have begun reflecting on their ethical and political orientation towards AI, like the ones listed below.
Responsible Computer Science Challenge: An initiative designed to improve ethics education in global undergraduate computer science programs. It hopes to educate “a new wave of engineers who bring holistic thinking to the design of technology products.”
Institute for Ethical AI & Machine Learning: A UK-based global research centre dedicated to some of the most cutting-edge technical research on ethical AI development. Its main focus includes research into ethical processes, frameworks, operations and deployment. It is staffed by volunteer teams of data science and machine learning experts who partner with academics, industry experts and policy-makers on research projects.
Berkman Klein Center: A Harvard-based hub for academic research and enquiry into the intersection of society and the internet. Its research covers emerging technologies, including AI and innovation. Its primary mission is to educate both internal and external stakeholders and help the public ‘turn education to action’. It also runs crucial projects dedicated to autonomous vehicle safety, effective AI education and the impact of AI on media.
Open Roboethics Institute: A non-profit think tank dedicated to exploring the potential ethical and societal impacts of AI innovation. Its primary goal is to improve knowledge of AI ethics among technologists, business leaders and regulators. Its research covers human-AI topics, including ethical AI for senior healthcare.
From my readings, I realised that there are some fundamental ethical aspects of AI, which I have listed below.
- Transparency & explainability
- Privacy protection and security
- Human-centred values
Based on the above aspects, I believe that the following steps will help organisations fully utilise the potential of AI without compromising on ethics.
Setting up an Ethics Team: Every organisation that uses AI should set up an AI ethics team that is technically capable and able to liaise with the data analytics team. The team should be consulted from the discussion stage of an AI project to ensure that the project does not violate AI ethics principles. Involving them throughout the project lifecycle will ensure that the analytics team adheres to those principles, for example by not using certain predictor variables that might cause discrimination.
Develop an AI Code of Ethics: As an analytics professional, one should consider questions like:
- How is AI used in my current job function, and what does it help me achieve?
- What consent do I need (from customers, employees, etc.) around that data?
- What third parties will handle sensitive data, and for what purpose?
- Does the purpose align with the organisation’s core mission and values?
AI Ethics Training: We need to make sure that all relevant staff are trained on how to uphold our AI ethical commitments, utilising employee onboarding, workshops, ethics training modules, toolkits and seminars. Training can be delivered by the internal AI ethics team or by an external party, depending on the level required. There should also be periodic refresher sessions to accommodate the latest practices.
Cross-Industry Workshops: As we navigate this evolving world of ethical AI, it will become more critical than ever to share practices and identify what we have learned along the way. Such workshops would enable us to hear which approaches have proved useful for scaling and implementation, and encourage participants to share their best practices for championing responsible and ethical technology. The pursuit of responsible, ethical artificial intelligence is critical, and it is bigger than any single company or organisation.
Public Engagement: I feel that AI researchers are overwhelmingly male and likely to come from predominantly academic or scientific backgrounds. Theories of collective intelligence and cognitive diversity show that more diverse groups are better at solving problems. This lack of diversity also means that AI researchers often focus on solving the problems of people like them rather than those of the wider public. Artificial intelligence holds many promises, but its exclusivity may be holding it back. Constant interaction with the public, and an understanding of their views and requirements on AI, will help the AI team alleviate public misconceptions, such as fears of job losses or loss of privacy, and also think from different perspectives.
In this article, I have briefly described the history, current state and future prospects of AI ethics from my perspective as a data analytics professional. With the advent of GDPR and similar initiatives, I am sure we will hear much more about AI ethics in the near future.
Meglena Kuneva (2009), ‘Keynote Speech — Roundtable on Online Data Collection, Targeting and Profiling’. https://ec.europa.eu/commission/presscorner/detail/en/SPEECH_09_156
Nick Bostrom & Eliezer Yudkowsky, ‘The Ethics of Artificial Intelligence’. https://intelligence.org/files/EthicsofAI.pdf
Trevor Hastie, Robert Tibshirani & Jerome Friedman, The Elements of Statistical Learning. https://web.stanford.edu/~hastie/Papers/ESLII.pdf