The rise of AI and digital technologies is enabling new capabilities, disrupting established business models and changing the way we live and work. At the same time, this technological shift is raising serious concerns around ethics, privacy, security and the future of humanity.
The notion of ethics has evolved. Decisions about right and wrong have always depended on human cognition, guided by popular sentiment and socially acceptable norms. Now, with the rise of AI, machines are gradually taking over functions of human cognition, a trend that author Ray Kurzweil predicts will accelerate and culminate in the singularity, a point in the possibly distant future at which machine intelligence irreversibly surpasses human intelligence. This shift is causing technologists, researchers, policymakers and society at large to rethink how we interpret and implement ethics in the age of AI.
There are several dimensions of ethics and AI:
1. Equitableness. AI works on analytical models, which depend on training data. Inherent biases in training data lead to predisposition in the insights and recommendations generated by AI programs. Racial bias in crime detection and the regional bias in language interpretation are some examples of this problem. There is fear that AI may disadvantage large sections of societies due to such biases.
2. Inclusiveness. When AI takes center stage in decision making, especially in services or access to resources for the population at large, ensuring inclusiveness is as important as fairness. For this to happen, the needs and context of all sections of society must be reflected in the decision logic. When the datasets alone cannot guarantee this, human intervention has to drive inclusiveness.
3. Security. AI is increasingly used in national security and weapons systems. Today, several manual steps and human intervention points ensure that such systems are not invoked erroneously or under circumstances that could pass a point of no return. As the technology matures and takes over more decision-making functions, there is a genuine fear that machines programmed to achieve the best outcome in a conflict might act against the interests of peace and humanity. At the same time, heavier reliance on AI means that attackers who breach these systems can cause far greater damage.
4. Transformation of employment. AI is often accompanied by extensive automation, which shifts capacity and capability requirements and deeply affects workforce needs. While new economic opportunities are created, old job functions are disappearing, and workers will need retraining to remain employable.
5. Wealth. Historically, in most capitalistic constructs, wealth creation was tied to economic contribution or participation in the decision-making process. Now AI and digital technologies are impacting both significantly, causing a shift in how and where wealth gets created.
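The bias mechanism described under equitableness above can be illustrated with a toy sketch. Everything here is hypothetical — the regions, the loan-approval scenario and the counts are invented for illustration — and the "model" is just a majority-decision lookup standing in for a real classifier. The point is only that a system fit to skewed historical decisions reproduces that skew:

```python
from collections import Counter

# Hypothetical training data: past loan decisions skewed against region "B".
history = (
    [("A", "approved")] * 80 + [("A", "denied")] * 20
    + [("B", "approved")] * 20 + [("B", "denied")] * 80
)

def train(rows):
    """Learn the majority decision per region -- a stand-in for a real model."""
    counts = {}
    for region, outcome in rows:
        counts.setdefault(region, Counter())[outcome] += 1
    return {region: c.most_common(1)[0][0] for region, c in counts.items()}

model = train(history)
print(model)  # → {'A': 'approved', 'B': 'denied'}
```

A production model is far more complex, but the mechanism is the same: with no correction for historical bias, optimizing for fit to past decisions bakes that bias into every future recommendation.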
One of the disciplines being explored to deal with these concerns is metaethics, the philosophical study of the nature and drivers of morality. Metaethics does not judge whether something is right or wrong, which is the domain of applied ethics, nor does it frame the criteria for defining right and wrong, which is the work of normative ethics. Instead, metaethics deals with the origins and principles of ethics, much as metadata describes data. By understanding how ethics has evolved, we can create context, reach better conclusions on ethical issues and take corrective action when required.
To face the challenges of the future, we also need to develop a new discipline of meta-intelligence by taking inspiration from the concepts of metadata and metaethics. Doing so will help us improve the traceability and trustworthiness of AI-driven insights. The concept of meta-intelligence has been doing the rounds of thought leadership for the last few years, especially led by people thinking about and working on singularity.
Technological evolution and the rise of AI have become central to human progress. Businesses around the world are being transformed by these technologies, and it is estimated that within the next decade, more than 70% of businesses and institutions will be powered by AI. In this new world order, metaethics and meta-intelligence might prove to be the best defense for the future of humanity.
With every major technology-driven transformation at the societal level come new concerns and challenges. Ethics, privacy and cybersecurity are the big rocks of the algorithmic age. Business leaders have a pivotal role in balancing the business benefits of AI against the exposure the technology creates. The three most important steps they can take are:
1. Set up ethics, privacy and cybersecurity as organizational priorities, and establish a dedicated independent function for policy formulation and governance.
2. Review the intersection of AI-driven decision support and ethical considerations to ensure equitableness, inclusiveness, privacy and security.
3. Communicate extensively about the significance of ethics, privacy and security to both internal and external stakeholders.
This is a fast-evolving space, and it will serve business leaders well to personally learn about the progress and guide their organizations through the journey.