‘Technology that identifies people remotely can be used to criminalise them’

To get a wider sense of academic views on facial-recognition ethics, Nature this year surveyed 480 researchers who have published papers on facial recognition, artificial intelligence (AI) and computer science.
On some questions, respondents showed a clear preference.
When asked for their opinions on studies that apply facial-recognition methods to recognise or predict personal characteristics (such as gender, sexual identity, age or ethnicity) from appearance, around two-thirds said that such studies should be done only with the informed consent of those whose faces were used or after discussion with representatives of groups that might be affected.
But on other issues, academics were split. Around 40 per cent of the scientists in the survey felt that researchers should get informed consent from individuals before using their faces in a facial-recognition data set, but more than half felt that this wasn’t necessary.
The researchers’ dilemma is that it’s hard to see how they can train accurate facial-recognition algorithms without vast data sets of photos, says Sébastien Marcel, who leads a biometrics group at the Idiap Research Institute in Martigny, Switzerland. He thinks that researchers should get informed consent, but in practice, they don’t.
His own group doesn’t crawl the web for images, but it does use online image data sets that others have compiled. “A lot of researchers don’t want to hear about this: they consider it not their problem,” he says.
Ed Gerstner, director of journal policy at Springer Nature, said the publisher was considering what it could do to discourage the “continued use” of image databases that lack explicit consent from the people in the images for their use in research.
Nature’s survey also asked researchers whether they felt that facial-recognition research on vulnerable populations — such as refugees or minority groups that were under heavy surveillance — could be ethically questionable, even if scientists had gained informed consent.
Overall, 71 per cent agreed; some noted it might be impossible to determine whether consent from vulnerable populations was informed, making it potentially valueless.
Some of those who disagreed, however, tried to draw a distinction between academic research and how facial recognition is used. The focus should be on condemning and restricting unethical applications of facial recognition, not on restricting research, they said.
Ethicists regard that distinction as naive. “That’s the ‘I’m just an engineer’ mentality — and we’re well past that now,” says Karen Levy, a sociologist at Cornell University in Ithaca, New York, who works on technology ethics.
In the past year, there has been growing scrutiny of universities’ partnerships with companies or research programmes linked to mass surveillance in Xinjiang.
The Massachusetts Institute of Technology (MIT) in Cambridge, for example, said it would review its relationship with the Hong Kong-based tech firm SenseTime after the US government — in the middle of a trade war with China — blacklisted the firm and other Chinese AI companies, such as Megvii in Beijing, over their alleged contributions to human-rights violations in Xinjiang.
In 2018, SenseTime and MIT announced they had formed an “alliance on artificial intelligence”; MIT says that SenseTime had provided an undisclosed sum to the university without any restrictions on how it would be used and that the university will not give it back.
Both Megvii and SenseTime contest the US blacklisting. SenseTime says its technology has “never been applied for any unethical purposes”, and Megvii says it requires its clients “not to weaponise our technology or solutions or use them for illegal purposes”.
Academic conferences have been contentious, too. The Chinese Conference on Biometrics Recognition (CCBR) was held in Xinjiang’s capital, Ürümqi, in 2018.
Anil Jain, a computer scientist at Michigan State University in East Lansing, sat on the conference’s advisory board and travelled there to give a speech.
Some AI researchers, including Toby Walsh at the University of New South Wales in Sydney, Australia, later criticised Jain for this in stories reported by the New York City-based Coda magazine.
Coda magazine also noted that Springer Nature sponsored the conference; the company said its role was limited to publishing CCBR proceedings and that it had strengthened its requirements for conference organisers to comply with the publisher’s editorial policies after concerns were raised about past content.
And Jain challenged the critique, telling Nature that attending conferences in China “does not mean that … international conference participants, like me, condone these atrocities against minorities”. Growth in surveillance there shouldn’t be a reason to “curtail scientific exchange”, he said.
Jain remains on the advisory board for CCBR 2020–21; Springer Nature is still publishing the conference abstracts. And major international computer-vision conferences have continued to accept sponsorship from Chinese firms.
Just after the blacklisting, SenseTime and Megvii sponsored the 2019 International Conference on Computer Vision, and Megvii sponsored the 2020 Conference on Computer Vision and Pattern Recognition, although Megvii’s logo was removed from that conference’s website after the meeting took place.
“Conferences should avoid sponsors who are accused of enabling abuses of human rights,” says Walsh. However, he notes that last year the non-governmental organisation Human Rights Watch, based in New York City, withdrew initial allegations that Megvii facial-recognition technology was involved in an app used in Xinjiang. Conference organisers did not respond to a request for comment.
Questionable research projects have popped up in the United States, too. On May 5, Harrisburg University in Pennsylvania posted a press release declaring that researchers there had developed facial-recognition software “capable of predicting whether someone is likely going to be a criminal”, with “80 per cent accuracy and no racial bias”.
The announcement triggered a wave of criticism, as had previous studies harking back to the discredited work of nineteenth-century physiognomists. One notorious 2016 study reported that a machine-learning algorithm could spot the difference between images of non-criminals and images of convicted criminals supplied by a Chinese police department.
Harrisburg University deleted its press release on May 6 following the outcry, but one question lingered: the release had said that the work was to be published by Springer Nature in a book series, a claim the publisher later denied.
On June 22, more than 2,400 academics signed a letter from a group called the Coalition for Critical Technology (CCT), asking Springer Nature not to publish the work and calling on all publishers to refrain from publishing similar studies.
The letter pointed out that such studies are based on unsound science. It also noted that algorithmic tools that tell police where or whom to target tend to provide a scientific veneer for automated methods that only exacerbate existing biases in the criminal justice system.
Three days earlier, more than 1,400 American mathematicians had written a letter asking their colleagues to stop collaborating with police on algorithms that claim to help reduce crime, because of concerns about systemic racism in US law-enforcement agencies.
Springer Nature was already under fire for a different paper, published in January in the Journal of Big Data, on detecting ‘criminal tendency’ in photos of criminals and non-criminals. After researchers from the IEEE got in touch with ethical concerns, Margeret Hall, the paper’s co-author at the University of Nebraska Omaha, asked in June for the paper to be withdrawn.
Hall says the now-retracted paper was “indefensible”. Springer Nature says the journal reviewed its processes and now requires authors to include statements on ethics approvals and consent when submitting manuscripts.
Some of the survey respondents in China said that they were offended by the question about research on vulnerable populations. “You should not say that in Xinjiang some groups are detained in camps,” wrote one. Just under half of the 47 Chinese respondents felt that studies on vulnerable groups could be ethically questionable even if scientists had gained consent, a lower proportion than among respondents from the United States and Europe (both above 73 per cent).
One Chinese-American AI researcher, who didn’t want to be named, said that part of the problem was a cultural split in the field. “The number of Chinese researchers at top conferences who actively support censorship and Xinjiang concentration camp[s] concerns me greatly. These groups have minimal contact with uncensored media and tend to avoid contact with those who don’t speak Mandarin, especially about social issues like this. I believe we need to find ways to actively engage with this community,” they wrote.
Nature asked researchers what the scientific community should do about ethically questionable studies. The most popular answer was that during peer review, authors of facial-recognition papers should be asked explicitly about the ethics of their studies.
The survey also asked whether research that uses facial-recognition software should require prior approval from ethics bodies, such as institutional review boards (IRBs), that oversee research with human subjects. Almost half felt it should, and another quarter said it depended on the research.
Researchers who work on technology that recognises or analyses faces point out that it has many uses, such as finding lost children, tracking criminals, accessing smartphones and cash machines more conveniently, helping robots to interact with humans by recognising their identities and emotions, and, in some medical studies, helping to diagnose or remotely track consenting participants. “There are a number of lawful and legitimate applications of face and biometric recognition which we need in our society,” says Jain.
But researchers must also recognise that a technology that can remotely identify or classify people without their knowledge is fundamentally dangerous, and they should resist its use to control or criminalise people, some scientists say.

  • A Nature magazine report