AI can help trace language to violence

‘We fight like hell.’ ‘Rally the troops.’ ‘Join the fight against corruption.’ ‘A war on poverty.’ ‘Battle the alien intruder (Covid-19).’ Every day, militaristic and violent metaphors are used by journalists and political actors alike to communicate and mobilize action. These word choices may seem effective, yet these metaphors, imbued with violent imagery, can be dangerous.

From a policy standpoint, they are also ineffective and potentially harmful. One example is how the global “war on drugs” terminology victimized and stigmatized people and misplaced blame. As others have noted, as with any war, there are always civil rights abuses. The ‘war on drugs’ is considered an epic failure: its repressive strategies, focused on criminalization, have led to the arrest and incarceration of tens of millions and, according to The Leadership Conference on Civil and Human Rights, “filled prisons and destroyed lives and families without reducing the availability of illicit drugs or the power of criminal organizations”.

A more co-operative approach, such as Australia and Singapore’s campaigns to reduce water consumption featuring ‘Water Wally’ and ‘Water Sally’, may be a better foundation for influencing behaviors. Evidence suggests that the results of using fear, for instance in road safety campaigns, are at best uneven. By comparison, a campaign in Uganda to give women confidence to report household violence resulted in reductions in violence against women. Protecting voice may be more powerful than inciting fear.

Today, big data enables us to explore the role of language in shaping perceptions associated with violent behavior. Stronger evidence of the relationship between language and violence has long been sought, even before a year of violent conflict throughout the world fueled by inflammatory rhetoric. The UN and World Bank’s flagship report on conflict prevention, Pathways for Peace, and other studies show that perceptions of exclusion may be more consequential for conflict risk than exclusion itself. Words matter.

Traditionally, much of the scholarly work on the drivers of violent conflict focuses on macro-level factors such as group-specific grievances related to access to power, justice, security, services, land, and resources. Recent work identifies these factors as sources of heightened risk during shocks such as natural disasters or economic turbulence.

We know less about the role played by influential actors in mobilizing people towards or away from violence during such episodes. But new artificial intelligence methods that leverage large unstructured language datasets and convert them to structured data are helping relate specific language to outcomes. These measures show that the relationship between influential actors’ language and violence is significant. In a Kenyan study published by the International Growth Centre and the LSE-Oxford Commission on State Fragility, Growth and Development, AI was able to forecast (with 85% accuracy) change in levels of violence up to 120 days into the future.
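The Kenya study’s actual pipeline is not described here, but the core idea of converting unstructured language into structured data can be sketched simply. In this minimal, hypothetical illustration, speeches are scored against a small lexicon of violent terms to produce a numeric rhetoric signal that could feed a forecasting model; the lexicon, weights, and example texts are all invented for illustration.

```python
# Hypothetical sketch: turning unstructured text into a structured
# "violent rhetoric" signal. Lexicon and weights are illustrative only.
VIOLENT_LEXICON = {"fight": 1.0, "war": 1.0, "battle": 0.8, "enemy": 0.9}

def rhetoric_score(text: str) -> float:
    """Weighted count of violent terms, normalized by text length."""
    tokens = [t.strip(".,!?'\"").lower() for t in text.split()]
    if not tokens:
        return 0.0
    hits = sum(VIOLENT_LEXICON.get(t, 0.0) for t in tokens)
    return hits / len(tokens)

speeches = [
    "We fight like hell and battle the enemy",
    "Let us cooperate and rebuild together",
]
scores = [rhetoric_score(s) for s in speeches]  # first speech scores higher
```

A real system would replace the fixed lexicon with learned representations and combine the signal with contextual covariates, but the structural step — unstructured text in, comparable numbers out — is the same.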

Violent language may not only convey an idea through metaphor; it may be violent by design. To some, a policy argument may actually be intended to injure the other party, psychologically if not physically. To others, fighting to win an argument may be purely linguistic. More judicious use of language, and ensuing accountability for one’s choices, makes sense, particularly when the stakes are high. It is precisely the subjectivity of how we interpret others’ language, without knowing (for certain) their intent, that creates an evidential challenge.

Defining and regulating hate speech or incitement is challenging. The UN’s Strategy and Plan of Action on Hate Speech identifies hate speech as any kind of communication in speech, writing, or behavior that attacks or uses pejorative or discriminatory language with reference to a person or group on the basis of their identity. Multilateral efforts to implement responsive policy at the national and global levels have been hampered by the sensitivity of the subject matter. To determine incitement, the UN’s Plan of Action requires (subjective) consideration of context, a speaker’s societal status, their intent, the content and form of the language, its reach, and the imminence of violence.

Words can and do take on a life of their own. In Through the Looking-Glass, Humpty Dumpty remarked that when he used a word, it meant just what he chose it to mean – neither more nor less. But it is a case of seller beware. Once spoken or in print, words no longer remain the sole property of their originator. In public space, they will be interpreted by others, recast and reframed. Violent images in words can beget violent acts in reality, whether or not that was the intention of the speaker.

The space for injudicious usage is shrinking. Now that we have better methods for unlocking the link between language and acts of violence, it makes sense to think carefully in choosing our words. 

The recently launched World Development Report 2021: Data for Better Lives highlights that AI and machine learning are already used in development, in areas such as crop production, financial services, business innovations, and pandemic response. For example, scoring algorithms are used to offer microloans to first-time borrowers and customers without bank accounts in Africa (Kenya and Nigeria), India, and Mexico. Algorithms are also used to extract geolocated disaster-related language from Tweets to identify hotspots.
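The tweet-mining idea above can also be sketched in a few lines. This hypothetical example filters geotagged posts for disaster-related terms and tallies them into coarse location cells to surface hotspots; the field names (`text`, `lat`, `lon`), keyword list, and sample posts are assumptions for illustration, not any particular production system.

```python
# Hypothetical sketch of mapping disaster-related, geotagged posts
# into hotspot counts. Keywords and record fields are illustrative.
from collections import Counter

DISASTER_TERMS = {"flood", "earthquake", "fire", "landslide", "evacuate"}

def disaster_hotspots(posts):
    """Count disaster-related posts per rounded (lat, lon) grid cell."""
    counts = Counter()
    for post in posts:
        words = {w.strip(".,!?").lower() for w in post["text"].split()}
        if words & DISASTER_TERMS and post.get("lat") is not None:
            cell = (round(post["lat"], 1), round(post["lon"], 1))
            counts[cell] += 1
    return counts

posts = [
    {"text": "Flood waters rising, please evacuate", "lat": -1.28, "lon": 36.82},
    {"text": "Great match today!", "lat": -1.29, "lon": 36.81},
    {"text": "Earthquake felt downtown", "lat": -1.31, "lon": 36.83},
]
hotspots = disaster_hotspots(posts)  # two disaster posts fall in one cell
```

Real deployments use trained classifiers rather than keyword lists, but the output shape — language events aggregated by place — is what makes hotspot identification possible.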

Looking ahead, machine learning may be employed to build ex-ante resilience for crisis response operations by accounting for broader aspects of vulnerability, including likelihood of violence. AI can, therefore, advance accountability for harmful language as well as evidence-based prevention. Used for this purpose, AI can assist development and help protect a fundamental human right.


Original post: https://blogs.worldbank.org/governance/ai-can-help-trace-language-violence
