After decades of research and progress in the area of artificial intelligence (AI), we appear to have reached a point at which AI is no longer confined to utopian or dystopian conversations about the future, but is a present reality, impacting all industries, businesses, and aspects of life. Just like past technological innovations, the widespread impact of AI has elicited much concern, resistance, and backlash, including alarmist accusations of algorithms as vessels for “coded bias”, “weapons of math destruction”, and “sexist and racist robots”.
But could AI be an improbable weapon for improving Diversity, Equity, Inclusion, and Belonging (DEIB) initiatives? It is a question that matters more than HR practitioners tend to think, not least in light of the lackluster impact of typical DEIB interventions. Alas, HR seems far more fearful of AI than aware of its utility, and DEIB is no exception.
As I illustrate in my latest book, I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique, there’s no question that AI will add value in two specific areas of DEIB. The first is better diagnosis: telling us what truly goes on in a culture and revealing the hidden dynamics underlying many of the critical interactions between people at work, including the silent forces that determine why some people are more likely to get promoted than others, particularly when performance isn’t the explanation. For example, research shows that even in the absence of gender differences in everyday, granular work behaviors, men are significantly more likely to get promoted into management and leadership roles. If this cannot be attributed to more effective work behaviors or real performance differences, then the answer is bias.
The second is the ability to actually measure inclusion, in particular whether someone’s demographic status or identity predicts their actual status at work. Diversity is easy to quantify, at least once organizations pick their target categories and goals (e.g., hire more women, minorities, older workers, or neurodiverse individuals). But inclusion, which is about how people are really treated, is much harder to assess, let alone track.
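The logic above can be made concrete: if, among employees in the same performance band, promotion rates still differ by demographic group, then identity is predicting outcomes beyond merit. A minimal sketch with synthetic data (the groupings, bands, and numbers are illustrative assumptions, not real workforce data):

```python
from collections import defaultdict

def promotion_gap(records):
    """Compare promotion rates across demographic groups among employees
    with similar performance. `records` is a list of
    (group, performance_band, promoted) tuples. Returns
    {performance_band: {group: promotion_rate}} — if rates differ within
    the same band, demographics predict outcomes beyond performance,
    which is an inclusion red flag."""
    counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # band -> group -> [promoted, total]
    for group, band, promoted in records:
        cell = counts[band][group]
        cell[0] += int(promoted)
        cell[1] += 1
    return {band: {g: p / t for g, (p, t) in groups.items()}
            for band, groups in counts.items()}

# Synthetic illustration: same performance band, different promotion rates.
data = [
    ("A", "high", True), ("A", "high", True), ("A", "high", False),
    ("B", "high", True), ("B", "high", False), ("B", "high", False),
]
rates = promotion_gap(data)
```

In this toy dataset, group A in the "high" band is promoted at twice the rate of group B with identical performance, which is exactly the kind of gap the article argues AI-driven analytics can surface at scale.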
Think of AI as a data mining tool that acts as an X-ray for human interactions: it can tell us what goes on when people interact with each other, and how people are treated when they are part of vulnerable or underrepresented groups, especially compared to those who benefit from privilege. This is important because it allows us to go beyond perceptions and decode whether there are biases in behavior, which is really what we should be tackling.
Despite the popular appeal of “unconscious bias” interventions, it is time to accept that there is very little scientific support for the idea that toxic behavior and discrimination are the product of unconscious or implicit attitudes, or that making people aware of their biases is a valid approach for creating fair or equitable work environments. In fact, it is not people’s thoughts we ought to monitor but their actions, for humans are biased by design. For thousands of years we have been able to adhere to polite etiquette and act kindly towards our colleagues and neighbors, while we complain or bitch about them in private. This is not a bad measure of civilization: since we are not prewired to embrace or celebrate those who think or look differently from us, let us at least learn to work and live with them in harmony, which will require tolerance and rational compassion, especially when we are unable to naturally empathize with people (precisely because they appear too different from us).
Importantly, it is perfectly feasible to imagine a world in which human biases and meritocracy co-exist, and this scenario would represent substantial progress. In fact, since human biases are a given (unless we eliminate humans), the goal should be to leverage data and evidence to promote fairer, evidence-based practices. Simply put, if you want to increase meritocracy, you need to align people’s career success with their actual performance, defined as the value they add to a team or organization. Although the past 200 years have seen an unprecedented transition from more nepotistic to more meritocratic hiring, as embodied by the present intellectual capital age, there is still much room for improvement. Indeed, if you walk into any business and ask a random group of employees whether in their company those who are most successful (senior, better paid, etc.) truly contribute the biggest value, they will probably laugh at you.
Politics and nepotism are still alive and kicking, and they dramatically corrode efforts to create fair and effective organizations, which is why there is far less progress on DEIB than there should be. Here’s where AI will help: revealing and exposing the actual contribution people make to their teams and organizations, beyond perceptions and popular opinion, purifying our measurement of performance, and managing people based on their true output, which, incidentally, would put to bed the tedious discussion about hybrid work and working from anywhere.
Consider how Uber uses AI to manage its army of drivers (around 3.5 million). Uber does not rely on human managers to decide whether one driver is better than another, a process that would surely unleash that manager’s preferences, biases, and subjective views, all unreliable indicators of employees’ performance. Instead, its algorithms measure each driver’s number of trips, revenues, profits, accident claims, and passenger ratings. Granted, some drivers may be rated unfairly (too high or too low) because of factors unrelated to their actual performance, such as their gender, race, or social class, but in the grand scheme of things, the level of noise and bias will be marginal compared to the typical performance rating given to an employee by their boss.
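The principle behind this kind of algorithmic management is simply to score people on objective aggregates rather than a manager's impressions. A minimal sketch of the idea (the metric names, weights, and normalization are hypothetical illustrations, not Uber's actual system):

```python
def driver_score(trips, revenue, accident_claims, avg_rating, max_rating=5.0):
    """Combine objective usage metrics into a single 0..1 performance score.
    Weights are illustrative only; a real system would calibrate them
    against outcomes rather than hand-picking them."""
    rating_component = avg_rating / max_rating            # normalize rating to 0..1
    safety_component = max(0.0, 1.0 - accident_claims / max(trips, 1))  # claims per trip
    revenue_per_trip = revenue / max(trips, 1)
    productivity_component = min(revenue_per_trip / 20.0, 1.0)  # cap at an assumed $20/trip benchmark
    return (0.4 * rating_component
            + 0.3 * safety_component
            + 0.3 * productivity_component)
```

The point of the sketch is not the particular weights but the design choice: every input is a behavioral aggregate rather than a single observer's judgment, so any one rater's bias is diluted across thousands of data points.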
To be sure, it is unlikely that AI (or any other invention) will ever fully eliminate bias, because humans are biased by design. In fact, we would probably not want to eradicate bias completely even if we could, because doing so would make us very boring, homogeneous, and robot-like. For instance, much of the positive influence people have on each other when they work together is based on subjective or biased attitudes: “I work well with X because we have so much in common”, or “hiring leader X will energize people because she stands for their values and beliefs”.
However, if we are genuinely interested in creating more open and diverse societies, it is clearly useful to keep our biases in check. This starts by accepting that when we are free to follow our instincts or intuition, we are rarely as open-minded as we like to think. Left to their own devices, managers would mostly hire people like them and promote them based on how similar their opinions are, which is a recipe for creating a cult rather than a healthy culture. Likewise, without the tools and data to reveal how different people are treated at work, particularly when they are different, leaders will continue to perpetuate their self-serving delusion of having created an inclusive culture, an experience shared only by those who continue to enjoy the nepotistic privileges of belonging to the in-group.
Original post: https://www.forbes.com/sites/tomaspremuzic/2023/02/07/how-artificial-intelligence-can-boost-diversity–inclusion/