The third episode of this season of the Declarations Podcast delves into the topic of live facial recognition. Host Maryam Tanwir and panelist Veronica-Nicolle Hera sat down with Daragh Murray and Pete Fussey, who co-authored the Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology in July 2019. Live facial recognition has been a widely debated topic in recent years, both in the UK and internationally. While several campaigning organisations advocate against the use of this technology based on the prohibition of discrimination set out in human rights law, independent academic research reveals important insights into how the technology is trialled in practice. Our guests are at the forefront of this research, and present some of their findings in this episode.

We kick off the episode with definitional issues. What is facial recognition technology? Pete explains that, when we speak of “facial recognition”, we are in fact referring to several technologies: one-to-one systems (such as those used to unlock smartphones or to clear passport gates in airports), which do not need databases, and one-to-many systems like live facial recognition (LFR), which compare images of passers-by against databases.
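
To make the distinction concrete, here is a minimal sketch of the two matching modes Pete describes, assuming faces have already been reduced to fixed-length embedding vectors by a trained model. All names, data, and the 0.6 threshold are illustrative assumptions, not details of any real system.

```python
# Minimal sketch: one-to-one verification vs one-to-many identification.
# Assumes faces are already converted into embedding vectors by some model.
from typing import Optional

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one verification (e.g. unlocking a phone): compare the live
    probe against a single enrolled template. No database is needed."""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, watchlist: dict[str, np.ndarray],
             threshold: float = 0.6) -> Optional[str]:
    """One-to-many identification (live facial recognition): search the
    probe against every entry in a watchlist database."""
    best_id, best_score = None, threshold
    for person_id, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None means no watchlist entry raised an alert
```

The asymmetry is the point: verification answers “is this the enrolled person?”, while identification scans everyone who passes the camera against a whole database, which is why LFR raises distinct concerns.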

“[Live Facial Recognition] is widely seen to be more intrusive than other forms of surveillance.”

Pete Fussey

Some one-to-many facial recognition is retrospective, as in the US, and some takes place constantly in real time (as in the CCTV networks used in Chinese cities). These different types of facial recognition, both guests emphasize, have different implications. One-to-many live facial recognition systems are seen as more “intrusive” and “intimate”, their impact magnified by their ability to process images of several thousand people in a day.

“[Live Facial Recognition] has the potential to be an extremely invasive tool and… to really change the balance of power between state and citizens.”

Daragh Murray

Comparing LFR with past surveillance technologies reveals an exponential increase in state surveillance capacities. The East German “Stasi”, for instance – widely considered one of the most sophisticated surveillance apparatuses – had access to only a fraction of the information that can be collected today. That’s why consideration of the human rights impact of these technologies is essential.

As Pete notes, we often begin to review a new technology years after it is first deployed, and we tend to look at a single aspect rather than examining the technology more broadly. For instance, Pete highlights how we focus on authorisation decisions, even though a technology’s uses are likely to change over time. Potential future uses, therefore, need to be factored into our analysis.

“There needs to be proper thinking about the whole life-cycle of these technologies.”

Pete Fussey

Veronica then asks our guests to discuss their recent research on the Metropolitan Police’s practices. The Met has been trialling LFR since 2016, when it was first deployed at the Notting Hill Carnival. Pete highlights how important it is to see technology in context: it is hard to anticipate the direction a technology will take, and it is only through use that its nuances become apparent.

The report they co-authored blended sociology and human rights law. From a sociological perspective, their main finding is that the human adjudication that was considered essential to the technology’s compliance with domestic and international law was close to non-existent. As Pete told us, “There is adjudication, but it’s not meaningful”.

Regarding human rights, Daragh outlines the three-part test used to evaluate whether an interference with human rights can be justified: a new technology ought to (1) be in accordance with the law, (2) pursue a legitimate aim, and (3) be necessary in a democratic society. Human rights law, he stresses, is about protection from arbitrary interference by the state.

“There is adjudication, but it’s not meaningful.”

Pete Fussey

From this perspective, the report’s main finding hinged on compliance with the law. This is delicate, as there is no statute directly regulating LFR; the only legal basis stems from common law, which stipulates that the police ought to “undertake activities necessary to protect the public.” How can this ensure that LFR is not deployed arbitrarily? The report concluded that the Metropolitan Police’s deployment of LFR was most likely unlawful, as it probably does not comply with the UK’s Human Rights Act. Indeed, the Court of Appeal later found that a similar deployment by South Wales Police was unlawful.

“We concluded that it was unlikely the Met’s deployment of LFR would be lawful.”

Daragh Murray

As for the third limb of the test – necessity in a democratic society – there are conflicting norms: protection vs. privacy. In short, you have to weigh the technology’s utility against its potential harm. In this case, that would involve showing how LFR could be used to prevent crime.

There was also a lack of pre-deployment assessment. For instance, the well-documented demographic biases of LFR technology were never assessed. Pete highlights how the introduction of new technology is often smoothed through softened terminology: “it’s a trial,” for instance. The Met’s use of LFR, however, was a deployment rather than a trial.

So, how should LFR be used in the future, if it should be used at all? From a human rights approach, Daragh thinks what is most important is to consider every deployment on a case-by-case basis, and to distinguish between different technologies and contexts. He contrasts using LFR at a border, checked against a narrow database of known threats, with deploying it at a protest. The latter is likely to have a chilling effect on the right to protest and a “corroding effect on democracy”. The most problematic deployment, of course, would be running LFR across an entire CCTV network.

“The risk is that we sleepwalk into a very different type of society.”

Daragh Murray

Pete highlights how thinking about LFR technology purely from the perspective of data protection is too restrictive. Terrorism and child abuse are often invoked to justify the deployment of this technology, but those scenarios do not match the deployments our guests observed.

Both our guests argue that the biases built into the technology make its use fundamentally problematic, whatever the circumstances. As Pete says, it is well established scientifically that LFR algorithms exhibit biases across gender, age, and race. Knowing that, how can we justify introducing such technology into public spaces?
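
To make this concrete, here is a minimal, entirely hypothetical sketch of how such bias is typically quantified: by computing an error rate (here, the false-match rate) separately for each demographic group, as large-scale benchmark audits do. All records and group labels below are invented for illustration.

```python
# Hypothetical illustration: measuring demographic bias in a face-matching
# system by computing the false-match rate per group. Real audits use
# thousands of labelled trials per group; these four records are invented.
from collections import defaultdict

# Each record: (demographic_group, system_flagged_match, truly_a_match)
trials = [
    ("group_a", True, False),   # false match: an alert on the wrong person
    ("group_a", False, False),
    ("group_b", False, False),
    ("group_b", False, False),
]

false_matches = defaultdict(int)
impostor_trials = defaultdict(int)  # trials where the right answer is "no match"

for group, flagged, is_match in trials:
    if not is_match:
        impostor_trials[group] += 1
        if flagged:
            false_matches[group] += 1

for group in sorted(impostor_trials):
    rate = false_matches[group] / impostor_trials[group]
    print(f"{group}: false-match rate = {rate:.0%}")
```

A system whose false-match rate is markedly higher for one group will expose that group to disproportionately many wrongful stops, which is the practical harm behind the bias findings our guests cite.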

Daragh also points to the impact these technologies have on the relationship between citizens and the police. Previously, the police might have used community-based policing to work with areas affected by crime and address problems as they arose. With mass surveillance, however, entire populations are monitored and the police act on alerts generated by the surveillance system.

“We simply have no oversight or regulation of it. None.”

Pete Fussey

So, what is the public’s role in deciding whether LFR tech should be used? 

Pete argues that what matters is not so much whether a majority of the public approves of a technology, but rather its impact on the most vulnerable segments of the population. It is also challenging for the public to form an opinion on these topics, given their highly technical nature: if a technology is sold as working perfectly, this shapes the opinion we form of it. Daragh adds that prior community engagement is key: at Notting Hill, for instance, nothing was done to explain why the technology was being used.

“We overstate public opinion as a justification for the use of these technologies.”

Pete Fussey

Finally, we asked our guests whether LFR could ever be deployed in compliance with human rights. Daragh thinks it could be, but only in very narrow cases – airports, for example – and even then we do not yet have sufficient knowledge of the technology to give it a green light. Across a city, or in public locations more generally, he doubts it can ever be compliant with human rights. The chilling effect is key: how will this technology allow people to develop freely, perhaps outside the norms? Anything that has the potential to interfere with our freedom to build our own lives should not be implemented.

Nevertheless, Pete thinks that the technology will be used regardless of how problematic it is, and recommends that implementation be at least temporarily paused so that as many safeguards as possible can be put in place before any further roll-out.

Our panelist:

Veronica is an MPhil student reading Politics and International Studies at the University of Cambridge. Her research focuses on public perceptions of trust in government across democracies and authoritarian regimes. Originally from Romania, she completed her undergraduate degree at University College London. Her interest in human rights issues and technology stems from her work with the3million, the largest campaign organisation advocating for the rights of EU citizens in the UK.

Our guests:

Pete Fussey is professor of sociology at the University of Essex. Professor Fussey’s research focuses on surveillance, human rights and technology, digital sociology, algorithmic justice, intelligence oversight, technology and policing, and urban studies. He is a director of the Centre for Research into Information, Surveillance and Privacy (CRISP) – a collaboration between surveillance researchers at the universities of St Andrews, Edinburgh, Stirling and Essex – and research director for the ESRC Human Rights, Big Data and Technology project (www.hrbdt.ac.uk). As part of this project Professor Fussey leads research teams empirically analysing digital security strategies in the US, UK, Brazil, India and Germany.

Other work has focused on urban resilience and, separately, organised crime in the EU, with particular reference to the trafficking of children for criminal exploitation (he authored Child Trafficking in the EU: Policing and Protecting Europe’s Most Vulnerable (Routledge, 2017)). Further books include Securing and Sustaining the Olympic City (Ashgate), Terrorism and the Olympics (Routledge), and a co-authored book on social science research methodology, Researching Crime: Approaches, Methods and Application (Palgrave). He has also co-authored one of the UK’s best-selling criminology textbooks (Criminology: A Sociological Introduction) with colleagues from the University of Essex. He is currently contracted by Oxford University Press to author a book entitled “Policing and Human Rights in the Age of AI” (due Spring 2022).

Daragh Murray is a Senior Lecturer at the University of Essex Human Rights Centre & School of Law, specialising in international human rights law and the law of armed conflict. He has a particular interest in the use of artificial intelligence and other advanced technologies, particularly in intelligence agency and law enforcement contexts. He has been awarded a UKRI Future Leaders Fellowship to examine the impact of artificial intelligence on individual development and the functioning of democratic societies. This four-year project began in January 2020 and has a particular emphasis on law enforcement, intelligence agency, and military AI applications. His previous research examined the relationship between human rights law and the law of armed conflict, and the regulation and engagement of non-State armed groups.

Daragh is currently a member of the Human Rights, Big Data and Technology Project, based at the University of Essex Human Rights Centre, and the Open Source for Rights Project, based at Swansea University. He also teaches on the Peace Support Operations course run by the International Institute of Humanitarian Law in Sanremo.

He is on the Advisory Council of Fight For Humanity, an NGO focused on promoting human rights compliance among armed groups. Daragh previously worked as head of the International Unit at the Palestinian Centre for Human Rights, based in the Gaza Strip. In 2011, he served as Rapporteur for an Independent Civil Society Fact-Finding Mission to Libya, which visited western Libya in November 2011 in the immediate aftermath of the revolution.

Further reading

Pete Fussey and Daragh Murray (2019) ‘Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology’

Pete Fussey and Daragh Murray (2020) ‘Policing Uses of Live Facial Recognition in the United Kingdom’ in Amba Kak, ed., Regulating Biometrics: Global Approaches and Urgent Questions (published by the AI Now Institute)

Davide Castelvecchi (2020) ‘Is facial recognition too biased to be let loose?’ (featured in Nature)