It is no mystery that facial recognition technology deployed for public safety is controversial. In July 2018, when the London Metropolitan Police (the Met) trialled the technology, the London Policing Ethics Panel reported a lack of clarity about the legal basis for its use and its regulation, and said the police should continue to work closely with the relevant Commissioners to ensure proper oversight.
Almost a year later, the technology – which identifies people as they pass cameras in public places – continues to face the same objection: invasion of privacy. In response, five conditions for the use of facial recognition were drawn up, based on trials that evaluated how the technology affects day-to-day policing and revealed both its benefits and its shortcomings.
Useful, but invasive
The independent panel commissioned a survey of Londoners’ perceptions of facial recognition and of how it would be used in public safety. The survey drew on data from ten trials carried out by the Metropolitan Police across the city. More than 57% of respondents said police use of facial recognition software was acceptable, a figure that rose to about 83% when they were asked whether they supported using the technology to search for offenders.
However, while half of those surveyed felt that such software would make them feel more secure, more than a third raised concerns about its impact on their privacy. Dr. Suzanne Shale, chair of the London Policing Ethics Panel, commented on the survey’s findings:
“Given the impact that digital technology can have on public confidence in the police, ensuring that the use of such software does not compromise that relationship is absolutely vital.”
Maintaining that confidence is therefore essential if the technology is to remain in use. After an extensive review of the Met’s use of the software, the panel published a final report on June 4 recommending that live facial recognition should be fully and officially deployed by the police only if the five conditions below can be met:
- The overall public safety benefits must be large enough to offset any potential public mistrust in the technology;
- The technology must be shown not to introduce gender or racial bias into policing operations;
- Each deployment must be assessed and authorized in advance, to ensure it is necessary and proportionate to a specific policing goal;
- Operators must be trained to understand the risks associated with using the software and to understand that they are accountable for its use;
- Both the Metropolitan Police and the Mayor’s Office for Policing and Crime must develop strict guidelines to ensure that deployments balance the benefits of the technology against the potential invasion of privacy.
In addition to the five conditions, the panel also established a framework to support the police in trialling new technologies. The framework is designed to surface any ethical concerns about how a new technology will be used, ensuring it serves to protect the public without infringing on privacy rights.
The framework consists of 14 questions on engagement, diversity, and inclusion that the police must consider before proceeding with any technology trial. In conclusion, the panel’s chair said that “there are important ethical issues to be addressed, but these do not represent real reasons for not using facial recognition. We will be watching closely how the use of this technology progresses to ensure that it continues to be investigated ethically.”