Artificial intelligence is ubiquitous today, yet most of us do not know where AI is being used and are unaware of the biased decisions that some of these algorithms produce. There are AI tools that claim to infer “criminality” from face images, race from facial expressions and emotions from eye movements. Many of these technologies are increasingly used in applications that affect credit checks, fraud detection, criminal justice decisions, hiring practices, healthcare outcomes, education and lifestyle decisions, and they can also amplify the spread of misinformation.
Addressing this issue with robust tools for evaluating transparency, bias and fairness, along with ethical review of algorithms, is a good first step. Expanding the scope to include the teams and environments in which these products are built would help create AI systems that reduce unfair outcomes and do not disadvantage particular sectors of the population.
Who Shapes AI Today?
To understand who is shaping AI today, we need to look at who builds AI systems and how race, gender and other protected classes are represented among the people creating AI products.
Studies show that only 12% of machine learning researchers are women, 15% of AI research staff at Facebook are women and just 10% at Google. According to one report (via Fortune), 2.5% of Google’s workforce is Black, whereas it is 4% at Microsoft and Facebook.
The diversity problem is not just about women, gender or race — it is most fundamentally about what AI products get built, who they are built for and who benefits from their development.
Technical tools already exist to identify bias in datasets, create transparent algorithms and increase the interpretability of AI products through explainability. In addition, we need to increase the diversity of the AI engineers, stakeholders and decision-makers who can anticipate negative social, economic, health or legal outcomes.
What can CEOs and their top management teams do to lead the way in building diverse AI teams and communities of practice? Among others, we see five essential steps:
1. Restructuring Talent Acquisition
If you are a startup, you have the advantage of building a diverse talent pipeline from the outset. Hiring from historically Black colleges and universities (HBCUs) and Hispanic-serving institutions (HSIs) is a good first step. When startups fill roles with candidates from outside their existing networks, they build a strong foundation for diverse leadership. For larger and older organizations, widening the pool of candidates who reach the interview stage, looking beyond regular talent-sourcing strategies and paying close attention to the wording of job descriptions would go a long way toward bringing in diverse talent.
2. Sustainable Inclusivity
It is not sufficient only to widen talent pools and increase the number of candidates; it is equally important for business leaders to create an inclusive culture that retains diverse talent. Data scientists working on AI products are part of larger engineering departments that traditionally are not diverse. Making diverse teams feel included means maintaining open communication, addressing microaggressions and establishing concrete feedback mechanisms.
An example could be identifying employee resource groups (informal communities for like-minded team members) and obtaining feedback about what changes the organization can make to build a sustainable inclusive culture. Asking for regular feedback via email or Slack about how employees feel about the organizational culture can help build a culture that embraces diversity on an ongoing basis.
3. Pay Parity
Fair compensation is crucial to retaining data science talent, advancing AI research, building diverse AI products and shaping the way AI impacts society.
Compensation policies that provide guidelines for negotiation, narrow compensation bands and review hiring managers' pay decisions can go a long way toward attracting and retaining the right talent. Tracking the compensation patterns of underrepresented groups and ensuring there are minimal outliers can reduce turnover.
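To make the idea of tracking compensation patterns concrete, here is a minimal sketch of how a people-analytics team might flag groups whose median pay lags within a job level. All field names, the grouping scheme and the 5% tolerance band are hypothetical illustrations, not a standard or recommended methodology.

```python
from statistics import median

def pay_gap_report(employees, level_key="level", group_key="group", pay_key="pay"):
    """Return median pay per (level, group) and flag any group whose median
    falls more than 5% below the overall median for that level.
    The 5% threshold is an arbitrary illustrative choice."""
    # Bucket employees by job level so pay is only compared within a level.
    by_level = {}
    for e in employees:
        by_level.setdefault(e[level_key], []).append(e)

    medians, flags = {}, []
    for level, staff in by_level.items():
        overall = median(e[pay_key] for e in staff)
        groups = {}
        for e in staff:
            groups.setdefault(e[group_key], []).append(e[pay_key])
        for group, pays in groups.items():
            m = median(pays)
            medians[(level, group)] = m
            if m < 0.95 * overall:  # hypothetical 5% tolerance band
                flags.append((level, group, round(1 - m / overall, 3)))
    return medians, flags
```

A report like this only surfaces candidates for review; an outlier still needs human investigation before any conclusion about pay equity is drawn.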
4. Measure For Success
While D&I initiatives are complex, it is important to record and measure your experiments and initiatives. Questions such as “What outcomes are we trying to achieve?” and “What are the success indicators for measuring the impact of these programs?” help you see whether you are meeting your goals. Well-defined metrics such as the number of new hires, participation in meetings and net promoter scores (NPS) support the measurement, accountability and effectiveness of your programs.
Diversity among data scientists and machine learning engineers is a must. However, leadership roles such as CTO or head of product carry the decision-making and funding authority when an AI product is built. Hiring diverse people into these key positions is as important as hiring for technical roles.
5. Mentor Minorities
Equal access is the first step in spreading awareness of job opportunities and creating a pipeline of qualified candidates. Programs for girls and women in AI, hackathons and STEM-related school programs would help diverse talent realize the potential of careers in artificial intelligence and machine learning. For example, someone who is intimidated by math and programming can discover roles such as product manager, sales support and quality assurance, all of which sit within the field of data science but do not necessarily require hands-on programming experience.
Transparency around talent pipelines, hiring practices and published compensation bands is a table-stakes D&I initiative that can go a long way toward establishing diversity in the field of artificial intelligence. Transparency about AI products and where they are used would also help in assessing the wider social impacts of machine learning algorithms. Methods for testing and assessing bias would improve greatly once a diverse team of stakeholders, data scientists, data engineers and analysts is at the table.