“This new algorithm will need a lot of pictures of people. What if we use a morgue so we don’t have to worry about consent?” Although this is a fictitious example, modern-day tech workers often face similar questions.
Why? Because the rise of artificial intelligence based on machine learning has created a new class of sociotechnical challenges. Now is the time for industry and universities to acknowledge these new challenges and step up to meet them.
Since the beginning of the technology industry, educational institutions, legislatures, companies, and developers have worked to improve the quality of products and services. The resulting curricula, laws, corporate policies, standards, and development approaches have provided frameworks for engineers and product managers. Emerging technologies require the development of new frameworks.
In the early 2000s, industry had to get serious about computer security. Today, we have a new challenge: How do you turn the goal of responsible AI into code?
Specialized groups at Microsoft focus on translating ethics policy, research, and customer needs into actionable information for product teams across the company. This approach makes responsible AI real for employees and democratizes the capability of implementing responsible AI across every product.
The concept is not novel. It adapts horizontal efforts used throughout the software industry to achieve the goals of security, privacy, and accessibility. Addressing these challenges is hard: it requires dedication, a long-term focus, and the willingness to include intangibles when computing return on investment.
Addressing the range of challenges facing contemporary companies also requires an expansion of the talent profile of employees. Specifically, we need to ensure that those entering careers in technology have studied how technology can impact society.
Higher educational institutions should take this responsibility seriously. For example, a new initiative in ethics and transformative technologies at Seattle University, supported by Microsoft, is stimulating the development of new undergraduate courses in ethics and technology.
To implement responsible AI, new initiatives should focus on three contemporary shortcomings:
- Developers must learn how to apply responsible AI principles. For example, new approaches are needed so that data scientists can explain how their AI algorithms arrive at their decisions. Product managers should be able to raise concerns in product reviews early in the development cycle when it is still possible to make fundamental changes to the design. Access to information should be democratized with self-serve educational material.
- We must engage more disciplines to deal with the complexity of modern systems and meet customer expectations. Social scientists, lawyers, and domain experts are needed to help answer nuanced questions like, “Should we deploy this facial recognition system?” or “Should we sell this system to the military?”
- We must increase the diversity of our workforce. Conscious and unconscious biases rooted in developers’ backgrounds and experiences have resulted in technologies that do not serve everyone. A more diverse workforce will allow multiple perspectives to be brought to every issue, helping ensure that the resulting technology addresses the needs of all customers. Educational institutions must work even harder to diversify the student body in their tech majors. Industry must take steps to become more inclusive. In today’s technology industry, too many members of underrepresented groups encounter unequal work environments.
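The call for explainability above can be made concrete with even very simple tooling. As an illustrative sketch only (the model, feature names, weights, and dataset below are hypothetical, not drawn from any production system), permutation importance estimates how much a model's decision depends on each input by scrambling that input and measuring how much the output changes:

```python
import random

# Toy "model": a fixed linear scorer over three hypothetical features.
# The feature names and weights are illustrative only.
WEIGHTS = {"age": 0.1, "income": 0.8, "zip_code": 0.05}

def predict(row):
    """Score one record with the toy linear model."""
    return sum(WEIGHTS[f] * row[f] for f in WEIGHTS)

def permutation_importance(rows, feature, seed=0):
    """Mean absolute change in the model's output when `feature` is
    shuffled across rows; larger values mean the model leans harder
    on that feature when making its decisions."""
    rng = random.Random(seed)
    baseline = [predict(r) for r in rows]
    shuffled = [r[feature] for r in rows]
    rng.shuffle(shuffled)
    perturbed = [predict({**r, feature: v}) for r, v in zip(rows, shuffled)]
    return sum(abs(a - b) for a, b in zip(baseline, perturbed)) / len(rows)

# A small synthetic dataset for demonstration.
rows = [{"age": float(i), "income": float(i * 2 % 7), "zip_code": float(i % 3)}
        for i in range(20)]

importance = {f: permutation_importance(rows, f) for f in WEIGHTS}
```

In practice teams reach for richer, model-agnostic explainers, but the underlying question a product review would ask is the same one this sketch answers: which inputs actually drive the decision?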
The development of new AI applications has the potential to bring enormous improvements to every sector of our society. However, as AI plays an ever-larger role, the harmful impacts of poorly implemented systems can grow as well.
Corporations and educational institutions can work together to help ensure that product development teams are staffed with diverse employees from a variety of disciplines who are capable of judging the social impacts of new technologies and can help ensure they are deployed in a responsible manner.