
As facial-recognition systems grow more common in society, researchers and activists have pushed back against their spread. They are troubled by the technology’s inaccuracies, by the ways it is deployed and by the ethics of some research in the field.
This year, Nature collected the views of 480 researchers around the world who work in facial recognition, computer vision and artificial intelligence (AI) on some thorny ethical questions about research in the field. The results suggest that some scientists are concerned about the ethics of this work, but that others still don’t see academic studies as problematic.
Nature conducted the survey in February and March, inviting researchers by e-mail. (Because respondents chose whether to take part, the poll is a convenience sample and is not necessarily representative of scientists’ views as a whole.) More than two-thirds of the respondents were from Europe and North America, and another 10% were from China. Two-thirds worked at universities, and some 15% said they were at private companies. The full survey and results are laid out below; an accompanying feature contains more discussion of ethical concerns in this field.
Many researchers are worried about facial-recognition studies on vulnerable populations. Nature asked scientists their opinion on this, citing as an example studies of the predominantly Muslim Uyghur population in Xinjiang. Because many members of this group have been detained in camps, researchers have questioned whether they can freely give informed consent to research about them, and whether such studies benefit them. Some 71% of respondents said these types of study are, or might be, ethically questionable even if the scientists who did them reported gaining informed consent. Respondents in China were nearly evenly split between those who felt this type of research was acceptable with informed consent and those who had concerns.
Researchers who worried about the ethics of studying vulnerable groups supported a wide variety of actions that the academic community should take. Many thought that journals and conferences should strengthen the peer-review process to prevent the dissemination of ethically troubling research.
Some studies use facial-analysis methods to examine sensitive personal characteristics, such as recognizing or predicting gender, sexual identity, age, or ethnicity from appearance. Most researchers felt that investigators should take additional steps in these kinds of study. “Whether or not the studies should be done at all depends heavily on the context: why is this important? Who might benefit? At the very least, such studies should be subject to stringent ethical approval, where the approvals process includes the voices of the potentially affected groups,” one researcher wrote.
The vast majority of research in facial recognition uses large data sets of people’s faces, downloaded from public photo collections online, to train and test facial-recognition algorithms. Because such photos are public and often licensed (by the photographer) for others to use, researchers do not usually contact the people pictured to ask permission. But when asked, a large fraction of respondents (193 of 480) said that researchers should get specific informed consent from the people in the pictures — although more thought that this wasn’t necessary, as long as photos were freely available or had public-use licenses.
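The mechanics behind this practice are simple enough to show in a few lines. The sketch below is purely illustrative: it uses the open-source face_recognition library to turn photos into numerical face embeddings and to match an unknown face against a small “gallery”, the way many research pipelines do. The file paths are hypothetical placeholders, not material from any study discussed here.

```python
# Illustrative sketch of a standard face-matching pipeline, using the
# open-source face_recognition library. All file paths are hypothetical.
import face_recognition

# A "gallery" of photos downloaded from a public collection.
gallery_paths = ["photos/person_a.jpg", "photos/person_b.jpg"]

# Encode each gallery face as a 128-dimensional embedding vector.
gallery_encodings = []
for path in gallery_paths:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)  # one vector per detected face
    if encodings:
        gallery_encodings.append(encodings[0])

# Encode an unknown "probe" photo and find its nearest gallery match.
probe = face_recognition.load_image_file("photos/unknown.jpg")
probe_encodings = face_recognition.face_encodings(probe)

if probe_encodings and gallery_encodings:
    distances = face_recognition.face_distance(gallery_encodings, probe_encodings[0])
    best = distances.argmin()
    print(f"Closest match: {gallery_paths[best]} (distance {distances[best]:.2f})")
```

Nothing in this pipeline requires contacting the people pictured, which is why the question of consent looms so large in the survey responses.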
One scientist added: “There needs to be far greater public understanding of the licensing of images and data. At present, licence agreements are not written with the user’s understanding and empowerment in mind. They are written to enable the researcher to do what they want to do and have sufficient legal protection. We need a shift in how we get informed consent.”
Most of the respondents thought that research using facial-recognition software should require prior approval from ethics bodies that oversee research on human subjects, such as an institutional review board (IRB). But most questioned whether IRBs are equipped to perform such reviews.
A majority of the survey respondents wanted more training in the ethics of facial-recognition research, even if they already had some training.
Looking beyond research, the respondents to Nature’s survey had concerns about the use of facial-recognition technology in society, especially by private companies and the government.
Respondents were most uncomfortable about the use of facial recognition for live surveillance, whether in schools, at workplaces or by private companies monitoring public spaces. But they generally supported its use by police in criminal investigations. These opinions closely match the findings of a September 2019 survey of 4,000 British adults, conducted by the London-based Ada Lovelace Institute, a charity-funded research institute. That study, too, found majority support for facial recognition in policing, to unlock smartphones and in airports to check travellers’ identities, but almost no support for its use in schools or at work.
Respondents felt strongly that there should be additional regulations governing the use of facial-recognition technology by public bodies. More than 40% wanted to ban real-time mass surveillance.
And most also wanted additional rules to govern companies’ use of the technology.