In the summer of 2020, as the murder of George Floyd sparked a reckoning with racial injustice, IBM, Microsoft and Amazon stopped selling facial recognition technology to law enforcement. The moratoriums acknowledged what critics of the technology have long argued: that facial recognition is dangerously unreliable, leads to the oversurveillance of marginalized people, and lacks proper legal protections.
More than 20 US states, including California, the birthplace of the tech industry, have taken steps to restrict the use of facial recognition. But as crime rates in several major US cities rise, those restrictions are gradually being eroded. In the UK, meanwhile, the application of facial recognition in policing has met with very little resistance.
After a court ruled that South Wales Police had breached privacy and equality laws by using the technology, the force said it would continue deploying it regardless. "It's a judgment we can work with," the force said, before making minor adjustments to how the technology was used. London's Metropolitan Police also appears unfazed by the judgment, having deployed live facial recognition several times since.
But while legal challenges have failed to curb police forces’ appetite for facial recognition, concerns about the technology remain. This week the Ada Lovelace Institute – a research body funded by the Nuffield Foundation – became the latest UK organization to call for an end to the use of live facial recognition.
Matthew Ryder, a leading criminal lawyer, was commissioned by the institute to produce one of the most comprehensive analyses of biometric surveillance legislation to date. It warns that "the current legal framework is not fit for purpose, has not kept pace with technological advancements, and does not specify when and how biometrics can be used, or the processes to follow."
Perhaps the most important concern is that the technology exacerbates discriminatory policing. This usually happens in two ways. First, several studies have shown that live facial recognition systems disproportionately misidentify people of color, leading to wrongful arrests. An independent University of Essex review of the Met's trials found the technology accurately identified individuals in fewer than one in five cases.
Second, if black people are already disproportionately targeted by police forces for committing petty crimes, they will be overrepresented in mugshot databases. This creates a feedback loop in which previous episodes of unfair targeting lead to discriminatory policing in the future.
Ryder acknowledged to me that “there may be progress in terms of the nature of the biases in products that perform facial recognition, or attempts to correct some of the systemic issues in the application of the technology.” But he said “those issues have not been resolved”.
The call for a moratorium is "by no means a drastic step," Ryder continued, as it "comes in light of the Court of Appeal's decision" on South Wales Police's use of the technology, and "the way the companies themselves have expressed their concerns about the products they have made".
Ryder called for the creation of a “technology-neutral regulatory framework” that outlines the considerations public and private bodies must take before using biometric technology. Many legal and policy experts have already called for a framework like this, but Ryder’s review goes deeper.
Legislation should, he argued, cover the use of biometrics not only for identification, as existing privacy laws such as the General Data Protection Regulation (GDPR) already do, but also for classification. In doing so, the proposals would bring emotion recognition technology within the scope of the law for the first time. This field is expanding and is expected to grow rapidly in the coming years: recruiters are already using the technology to rank interview candidates based on their facial expressions, while schools are buying software to assess whether children are paying attention during lessons.
The other key proposal is the creation of a biometrics ethics board. Similar bodies already exist at a regional level in London and the West Midlands, but this would be a national board overseeing every police use of facial recognition in the country. The board would play an advisory role and its decisions would not be binding, but police forces that deviate from its guidelines would have to explain publicly why they had done so.
The proposals will likely be welcomed by civil liberties campaigners, who have long warned that the patchwork of laws covering facial recognition is inadequate. But ministers may be more resistant. The government has indicated that it wants to "simplify" the legal framework governing biometric data in law enforcement, while adopting a leaner, innovation-friendly approach to data governance after Brexit.
Ryder's proposals, by contrast, not only go beyond the GDPR but are also stronger than the first drafts of the EU's forthcoming artificial intelligence act. That legislation would treat emotion recognition systems as "limited risk", meaning that in most cases the only obligation on those deploying the technology would be to inform the public that they are being monitored (unless the systems are used in high-risk contexts such as law enforcement).
Ryder, however, said it would be wrong to view the report as anti-innovation. "Clear regulation gives those who try to innovate confidence in the four corners in which they can innovate," he said. "Obviously it's a debate, but we believe the trope that regulation necessarily stifles innovation is simply not borne out in practice."