This is the sixth blogpost in a series on Artificial Intelligence and Human Rights from Data & Society.
Artificial intelligence (AI) is increasingly used in commercial and government systems that provide services to, as well as monitor, billions of people around the globe. As the use of AI systems becomes more widespread, we are beginning to see how this technology may enhance discrimination against marginalized or vulnerable populations. Undoubtedly, AI will affect social and economic inclusion, risking the amplification of inequality at all levels. To address the emerging challenges in AI, we need to develop solutions from the standpoint of inclusion.
It’s time to take inspiration from inclusive movements — such as disability rights advocacy groups that helped shape norms and rules around building design to ensure more equitable access for citizens of all abilities — to build the AI future we want. This is particularly crucial now as various conversations about AI and inclusion have already started. However, these important conversations run the risk of being shaped by an ‘artificial intelligentsia’ that discusses inclusion without truly including the voices of the marginalized people likely to suffer most significantly in an AI ecosystem that didn’t consider their voice in its design.
It’s time to build ramps for individuals of all backgrounds to enter and shape conversations about AI technology that will affect generations to come.
The resistance and the vanguard
The interaction between technology and society over the past few decades, particularly around the right to privacy, the consequences of the digital divide, and the scope of unchecked electronic mass surveillance by governments, has created a vast fabric of academic and civil society organizations that are skeptical of Silicon Valley’s techno-utopianism.
In the past five years, multiple spaces have emerged to facilitate discussions about AI, from new multi-stakeholder organizations like the Partnership on AI to research centers like AI Now and projects like the Ethics and Governance of Artificial Intelligence Initiative. These diverse actors are researching how AI will impact society and exploring ways to reshape this ecosystem.
Others are introducing public interest concerns into the conversation. Academics and public intellectuals like Virginia Eubanks and Cathy O’Neil have raised awareness around the role AI can play in the exacerbation of social inequalities. The work of these and other vanguards has had significant reach among academia and civil society, culminating in events like the AI for Good Global Summit and Data & Society’s Artificial Intelligence & Human Rights workshop. These debates have rapidly moved toward collective agreements and calls for governance like the participatory Toronto Declaration on “protecting the right to equality and non-discrimination in machine learning systems.”
Rather than being limited to policy or techno-philosophical considerations, these conversations have involved sophisticated efforts to address the complex scientific and technical aspects of algorithmic bias. For example, the AI Now symposium brought public attention to the idea of bias, error, and misuse of AI technologies and the ACM FAT* conference is raising awareness around the fairness, accountability, and transparency of algorithmic systems. Projects like the Algorithmic Justice League are creating avenues to collectively identify and resist encoded bias in algorithms that could negatively impact vulnerable populations (for instance, risk scoring tools used by police and lenders).
Openness and transparency facilitate a path for inclusion, as they create opportunities for participation, information, collaboration, and education. Moreover, these efforts are key to ensuring protection from harms and fair access to benefits. Most importantly, they hint at why representatives from different areas of society need to have a role in decisions about both the deployment and regulation of technologies and the bodies of knowledge that affect them.
The “rights” kind of inclusion
Considering AI has the potential to change society at an unprecedented scale, it is important to recognize where marginalized voices, and the Global South generally, are underrepresented in this new ecosystem. Rather than its complex technological nature or possible dystopian futures, the exclusion of these voices is the biggest challenge to a world “bettered” by AI.
AI is a complex topic that goes beyond the images that popular culture has imprinted in our collective imagination. The software, algorithms, training sets, and classifiers behind it inevitably present a high bar for the layperson to understand. In that sense, the sophisticated technical nature of the topic, combined with the size and mandate of the corporate and academic actors behind it, does not offer an easy avenue for the inclusion of the majority, especially with respect to those already marginalized by our societies. From a human rights perspective: it is dismaying that those already marginalized are likely to suffer the most significant consequences in an ecosystem that didn’t consider their voice in its design.
Conversations about AI and inclusion have already started: Perhaps the most significant has been the Global Symposium on AI and Inclusion, which convened 170 participants from over 40 countries in Rio de Janeiro in November 2017. The event, co-organized by the Institute for Technology and Society of Rio de Janeiro (ITS Rio) and the Berkman Klein Center, presented inclusion as a key element in the creation of an ecosystem around AI. Beyond reminding us of the vast impact that AI may have on human life, it also highlighted the time-bound opportunity we have to steer AI in a collectively desirable direction. Moreover, this symposium created space to consider inclusion politics not as a secondary or parallel conversation, but rather as mutually constitutive with AI. Events like this are inspiring as they remind us that we have the power to transform AI into a domain of social inclusion and equality.
AI has the potential to impact every human life both directly and indirectly. So while it makes sense to rely on policy-savvy and technically sophisticated actors to guide the conversation while the socialization of AI is still in its infancy, it is also important to include the greatest diversity of voices in this conversation. If we want to avoid recreating existing forms of structural inequality, we should start addressing these inequalities in the very events that are discussing how to incorporate inclusion in the AI ecosystem.
Paradoxically, the marginalized are largely missing from a conversation that uses them as a justification. Thus far, the approach has been to gather experts to discuss and strategize ways to better implement inclusion in the AI ecosystem without consulting the so-called marginalized groups they seek to protect. In that sense, we must be wary of replicating the exclusionary practices of the international cooperation model and instead move toward a paradigm that sees technology design, transfer, and debate as an act of solidarity.
We need to guard against the creation of an artificial intelligentsia that discusses inclusion without including the other. And equally important, we ought to find ways for any public interest discussion to be transparent and accountable towards those we supposedly represent. The multi-stakeholder efforts that are shaping the discourse on AI are far from being sufficiently representative of global civil society or marginalized populations that are at greater risk.
There is no reason to doubt that what drives most of these efforts is a genuine concern for the public interest. However, while these coalitions have a necessary representation of academia, the private sector, and international NGOs (INGOs), they lack the direct representation of civil society. While INGOs are an important voice, they are not a sufficient representation of the Global South. For each INGO, there is a myriad of regional and national organizations that could form a better bridge between marginalized populations and any world-changing development.
How to create greater inclusion?
The need for inclusion can be rooted in the right of all human beings to benefit from scientific developments, as articulated in Article 15 of the International Covenant on Economic, Social, and Cultural Rights. It states that every individual has the right to enjoy the benefits of scientific progress and its applications. The crucial battles being fought to embed fairness and transparency in AI will have a better chance of success if fought from inclusive platforms for dialogue and debate. Decision-making around these policies can be done more effectively and humanely if actors from all walks of political life have the opportunity to understand and participate.
How do we promote and include the voices of marginalized populations in the debate on AI? How do we bring the seemingly disconnected Global South, or the marginalized communities in both hemispheres, to not only participate, but also shape the discussions around AI?
Inclusion may not be easy to achieve in AI and may be perceived as an unnecessary deceleration by some, but inclusion is important and it has proven vital for societies. As a comparison, we can consider ramps in buildings. Increasingly, and after decades of work by disability rights advocates, we started to see ramps included in building design so that citizens of all abilities can enter, use, and exit public and private infrastructure. Architects and building owners no longer challenge the need to build a costly ramp into a building that will serve only a few of us. We as a society increasingly see this as a reflection of our collective desire to open spaces of work and participation to all.
The time has come to do the same in AI. We need to build ramps and provide elevators for people of all backgrounds to enter and shape the conversation around a technology that will affect our generation and those to come. Marginalized people bring context-awareness, different experiences, problem-solving, and analysis to the table.
While there is no definitive roadmap, there are a few things we can do:
- Recognize, celebrate, and amplify existing inclusive practices. From open-source AI technology and events with ASL translation, to considering notions of inclusive intelligence and inclusion politics, there are a number of good practices to celebrate, support, and replicate.
- Transform the debate on AI into a domain of human inclusion and equality. As the Toronto Declaration states, “there are many discussions taking place now at supranational, state and regional level, in technology companies, at academic institutions, in civil society and beyond, focusing on how to make AI human-centric and the ‘ethics’ of artificial intelligence.” Building bridges between these events and marginalized communities in both hemispheres could produce a rich and transformative shift in the direction of the conversations.
- Promote the distribution and federation rather than the centralization of the debate. We should be wary of framing the AI debate solely at a global scale, as that framing can itself become exclusionary. Do we need to think of the debate homogeneously? Can we imagine a future with discussions around AI of many shapes and sizes? Including local-scale analysis may shed light on the debates that have so far taken place at the macro level.
- Advance AI literacy. We need a new form of digital literacy, one that mixes data and privacy literacy with a basic understanding of AI. It is unlikely that we can future-proof all corners of society, but an inclusive dialogue will contribute to a better global fabric, especially when everyone has the opportunity to understand what’s at stake. We can achieve this by promoting general education about AI; demystifying concepts that have been impacted by years of anthropomorphic and dystopian visions of AI; and creating avenues to understand the exact use of AI in critical social systems.
- Break silos. Connect communities working on different issues and avoid creating silos. For example, there is a considerable distance between the human rights and machine learning communities at the moment. We also need to promote participation from non-technical civil society in discussions of the ethics and governance of AI. Perhaps most importantly, we need to challenge the status quo and demand that marginalized voices are front and center in the fight for non-discriminatory AI.
As with other aspects of social life, the world’s marginalized and vulnerable are at the highest risk of harm when it comes to AI. Finding reasonable strategies to include diverse voices and experiences of global, marginalized communities may be the most substantial challenge that we have yet to face. But it is an effective avenue to reduce future harms and increase our collective preparedness and resilience.