The third episode of this season of the Declarations Podcast delves into the topic of live facial recognition. Host Maryam Tanwir and panelist Veronica-Nicolle Hera sat down with Daragh Murray and Pete Fussey, who co-authored the “Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology” in July 2019. Live facial recognition has been a widely debated topic in recent years, both in the UK and internationally. While several campaigning organisations advocate against the use of this technology based on the prohibition of discrimination set out in human rights law, independent academic research on the topic reveals important insights into trials of this technology. Our guests are at the forefront of this research, and present some of their findings in this episode.
We kick off the episode with definitional issues. What is facial recognition technology? Pete explains that, when we speak of “facial recognition”, we are in fact referring to several technologies: one-to-one systems (such as those used to unlock smartphones or to clear passport gates at airports), which do not need databases, and one-to-many systems like live facial recognition (LFR), which compare images of passers-by against databases.
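The one-to-one/one-to-many distinction can be sketched in a few lines of Python. This is a toy illustration only: the embedding vectors, the similarity threshold, and the watchlist below are all invented, whereas real systems use learned face embeddings and carefully tuned thresholds.

```python
from math import sqrt

THRESHOLD = 0.6  # invented cutoff; real systems tune this per deployment

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (toy stand-ins here)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled):
    """One-to-one matching (e.g. phone unlock): probe vs a single enrolled
    face. No database is needed."""
    return cosine_similarity(probe, enrolled) >= THRESHOLD

def identify(probe, watchlist):
    """One-to-many matching (LFR-style): probe vs every entry in a database.
    Returns the best match, or None if nothing clears the threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= THRESHOLD else (None, best_score)
```

The difference in scale follows directly: `verify` runs a single comparison, while `identify` runs one comparison per watchlist entry for every passer-by, which is what lets an LFR deployment process thousands of faces a day.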
“[Live Facial Recognition] is widely seen to be more intrusive than other forms of surveillance.”Pete Fussey
Some live facial recognition is retrospective, as in the US, and some takes place constantly in real time (as in the CCTV used in Chinese cities). These different types of facial recognition, both guests emphasize, have different implications. One-to-many live facial recognition systems are seen as more “intrusive” and “intimate”, their impact magnified by their ability to process images of several thousand people a day.
“[Live Facial Recognition] has the potential to be an extremely invasive tool and… to really change the balance of power between state and citizens.”Daragh Murray
Comparing LFR with past surveillance technologies reveals an exponential increase in state surveillance capacities. The East German “Stasi”, for instance – widely considered one of the most sophisticated surveillance apparatuses – had access to only a fraction of the information that can be collected today. That’s why consideration of the human rights impact of these technologies is essential.
As Pete notes, we have often begun to review new technology years after it is initially deployed, and tend to look at a single aspect rather than examining a technology more broadly. For instance, Pete highlights how we look at technologies with a focus on authorisation decisions, even though their uses are likely to change over time. Potential future uses, therefore, need to be factored into our analysis.
“There needs to be proper thinking about the whole life-cycle of these technologies.”Pete Fussey
Veronica then asked our guests to discuss their recent research on the Metropolitan Police’s practices. The Met has been trialling LFR since 2016, when it was first deployed during the Notting Hill Carnival. Pete highlights how important it is to see technology in context: it is hard to anticipate the direction a certain technology will take, and it is only through use that we can see its nuances.
The report they co-authored blended sociology and human rights law. From a sociological perspective, their main finding is that the human adjudication that was considered essential to the technology’s compliance with domestic and international law was close to non-existent. As Pete told us, “There is adjudication, but it’s not meaningful”.
“There is adjudication, but it’s not meaningful.”Pete Fussey
Regarding human rights, Daragh outlines the 3-part test used to evaluate a potential inconsistency with human rights law. From the human rights perspective, new technology ought to (1) comply with local law, (2) have a legitimate aim, and (3) be necessary in a democratic society. Human rights law is about protection from arbitrary intervention by the state.
From this perspective, the report’s main finding hinged on compliance with law. This is delicate, as there is no law directly regulating LFR; the only relevant law stems from common law, which stipulates that the police ought to “undertake activities necessary to protect the public.” How can this ensure that LFR is not deployed arbitrarily? The report concluded that the Metropolitan Police’s deployment of LFR was most likely unlawful, as it probably does not comply with the UK’s Human Rights Act. Indeed, a court in South Wales found that a similar deployment by the South Wales Police was unlawful.
“We concluded that it was unlikely the Met’s deployment of LFR would be lawful.”Daragh Murray
As for the third important test – necessity in a democratic society – there are conflicting norms: protection vs. privacy. In short, you have to demonstrate the technology’s utility against its potential harm. In this case, that would involve showing how LFR could be used to prevent crime.
There was also a lack of pre-deployment assessment. For instance, the significant biases widely accepted to exist in LFR technology were never assessed. Pete highlights how the introduction of new technology is often smoothed through softened terminology: “it’s a trial,” for instance. The Met’s use of LFR, however, was a deployment rather than a trial.
So, how should LFR be used in the future, if it should be used at all? From a human rights approach, Daragh thinks what is most important is to consider every deployment on a case-by-case basis, and to recognize the differences between technologies. He notes the difference between using LFR at borders against a narrow database of known threats and deploying it at a protest. The latter is likely to have a chilling effect on the right to protest and a “corroding effect on democracy”. The most problematic deployment, of course, is the use of LFR across an entire CCTV network.
“The risk is that we sleepwalk into a very different type of society.”Daragh Murray
Pete highlights how thinking about LFR technology from the perspective of data protection is too restrictive. Terrorism or child abuse are often invoked to justify deployment of this technology, but this does not fit with what our guests saw.
Both our guests argue that the biases built into the technology make its use fundamentally problematic, whatever the circumstances. As Pete says, it is a scientific fact that algorithms and LFR technology have several biases: gender, age, race. Knowing that, how can we introduce such technology to the public?
Daragh also points to the impact these technologies have on the relationship between citizens and the police. Previously, the police might have used community-based policing to work with areas affected by crime and address problems as they arose. With mass surveillance, however, you monitor entire populations and perform checks based on reports provided by the surveillance.
“We simply have no oversight or regulation of it. None.”Pete Fussey
So, what is the public’s role in deciding whether LFR tech should be used?
Pete argues that it is not so much about a majority of the public approving a technology, but rather its impact on the most vulnerable segments of the population. Indeed, it is challenging to form an opinion on these topics, given their highly technical nature. If a technology is sold as working perfectly, this influences the opinion we have of it. Daragh adds that prior community engagement is key: in Notting Hill, for instance, nothing was done to explain why the technology was being used.
“We overstate public opinion as a justification for the use of these technologies.”Pete Fussey
Finally, we asked our guests whether LFR could be deployed in compliance with human rights. Daragh thinks it could be, but only in very narrow cases – airports, for example. However, we do not yet have sufficient knowledge of the technology to give it a green light. Across a city or in specific public locations, he doubts it can ever be compliant with human rights. The chilling effect is key: how will this technology allow people to grow freely, perhaps outside the norms? Anything that has the potential to interfere with our freedom to build our own lives should not be implemented.
Nevertheless, Pete thinks that the technology will be used regardless of how problematic it is, and recommends that implementation be at least temporarily paused so that we can try to put in place as many safeguards as possible before a further roll-out.
Veronica is an MPhil student reading Politics and International Studies at the University of Cambridge. Her research is focused on public perceptions of trust in government across democracies and authoritarian regimes. She is originally from Romania but completed her undergraduate degree at University College London. Her interest in human rights issues and technology stems from her work with the3million, the largest campaign organisation advocating for the rights of EU citizens in the UK.
Pete Fussey is professor of sociology at the University of Essex. Professor Fussey’s research focuses on surveillance, human rights and technology, digital sociology, algorithmic justice, intelligence oversight, technology and policing, and urban studies. He is a director of the Centre for Research into Information, Surveillance and Privacy (CRISP) – a collaboration between surveillance researchers at the universities of St Andrews, Edinburgh, Stirling and Essex – and research director for the ESRC Human Rights, Big Data and Technology project (www.hrbdt.ac.uk). As part of this project Professor Fussey leads research teams empirically analysing digital security strategies in the US, UK, Brazil, India and Germany.
Other work has focused on urban resilience and, separately, organised crime in the EU with particular reference to the trafficking of children for criminal exploitation (he authored Child Trafficking in the EU: Policing and protecting Europe’s most vulnerable (Routledge) in 2017). Further books include Securing and Sustaining the Olympic City (Ashgate), Terrorism and the Olympics (Routledge), and a co-authored book on social science research methodology, Researching Crime: Approaches, Methods and Application (Palgrave). He has also co-authored one of the UK’s best-selling criminology textbooks (Criminology: A Sociological Introduction) with colleagues from the University of Essex. He is currently contracted by Oxford University Press to author a book entitled “Policing and human rights in the age of AI” (due Spring 2022).
Daragh Murray is a Senior Lecturer at the Human Rights Centre & School of Law, who specialises in international human rights law and the law of armed conflict. He has a particular interest in the use of artificial intelligence and other advanced technologies, particularly in an intelligence agency and law enforcement context. He has been awarded a UKRI Future Leaders Fellowship to examine the impact of artificial intelligence on individual development and the functioning of democratic societies. This four-year project began in January 2020 and has a particular emphasis on law enforcement, intelligence agency, and military AI applications. His previous research examined the relationship between human rights law and the law of armed conflict, and the regulation and engagement of non-State armed groups.
Daragh is currently a member of the Human Rights Big Data & Technology Project, based at the University of Essex Human Rights Centre, and the Open Source for Rights Project, based at the University of Swansea. He also teaches on the Peace Support Operations course run by the International Institute of Humanitarian Law in Sanremo.
He is on the Fight For Humanity Advisory Council, an NGO focused on promoting human rights compliance among armed groups. Daragh has previously worked as head of the International Unit at the Palestinian Centre for Human Rights, based in the Gaza Strip. In 2011, he served as Rapporteur for an Independent Civil Society Fact-Finding Mission to Libya, which visited western Libya in November 2011 in the immediate aftermath of the revolution.
Pete Fussey and Daragh Murray (2019) ‘Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology’
Pete Fussey and Daragh Murray (2020) ‘Policing Uses of Live Facial Recognition in the United Kingdom’ in Amba Kak, ed., Regulating Biometrics: Global Approaches and Urgent Questions (published by the AI Now Institute)
Davide Castelvecchi (2020) ‘Is facial recognition too biased to be let loose?’ (featured in Nature)
In this week’s episode, host Maryam Tanwir and panellist Yasmin Homer discuss the role of technology in the securitization of European borders with MEP Patrick Breyer and researcher Ainhoa Ruiz (see bios below). It was 71 years ago that the 1951 UN Refugee Convention codified the rights of refugees to seek sanctuary and the obligation of states to protect them. It was in 2015 that Angela Merkel famously declared “wir schaffen das” – “we can do it.” Yet the International Organization for Migration has described 2021 as the deadliest year for migration routes to and within Europe since 2018. At least 1,315 people died making the central Mediterranean crossing, while at least 41 lives were lost at the land border between Turkey and Greece. The creation of Fortress Europe is inserting technology into the heart of the human story of migration, as migrants uproot themselves to escape war, famine, political violence and economic instability in search of a better and safer life, undertaking an increasingly treacherous and unforgiving journey. What is the role of technology in the ongoing securitization of the EU’s borders? What are the implications for human rights?
“Fortress Europe starts ironically with a free movement agreement, which entails enforcing a harder exterior border. In some manner we are telling everyone that us inside this agreement are civilization and outside are barbarians.”Ainhoa Ruiz
The conversation starts off with a summary of the current situation at European borders. Ainhoa notes that, ironically, the concept of Fortress Europe can be traced back to the Schengen Agreement, which enforced a division between those inside the zone, who were granted free movement, and those outside, whose movement toward Europe was impeded. She also highlights how the notion of a European border creates a division between “civilized” Europeans and “barbaric” outsiders, and points to Article 12 of the International Covenant on Civil and Political Rights, which guarantees a right to leave one’s country.
“We are not actually sure what they are doing with this data. So, these technological walls affect migrants, but are also affecting us inside the free movement area.”Ainhoa Ruiz
So, what do Europe’s walls look like? Today, there are three physical walls on the EU’s borders. As Ainhoa notes, however, walls are not only physical, but also “mental” and “administrative.” Technology constitutes another wall, not only for people trying to come into the EU but also for the people living inside the wall, whose movement is being watched. The last “wall” is Frontex, which acts as a European border police force, forcing migrants to take ever more dangerous routes.
“The European Union is pouring enormous amounts of resources and money into building this fortress.”Patrick Breyer
Patrick highlights that we have just observed Holocaust Remembrance Day, which ought to be an opportunity for us to collectively reflect on the fact that we could all be refugees fleeing terror. For him, the starting point was the Syrian Civil War, when refugees were first perceived as a threat. This played into the hands of authoritarian parties, who were able to impose the theme on public debate, in turn leading democratic parties to follow suit. Both of our guests highlight a climate of fear: fear of globalization, fear of crime – and more. These concerns are compounded in issues of migration.
“They are starting to collect information about our plane travels, but they also want to expand it to train and ferry travels. They are using algorithms that evaluate the risk that we pose based on patterns that allegedly indicate a risk if we have certain criteria in common with perpetrators in the past.”Ainhoa Ruiz
This fortress is buttressed by the collection of personal data, leading us toward a security society that has more traits in common with China than we would like to admit. Our “mere existence” – as our guest puts it – is surveilled under the pretence of preserving Europe’s security. The military complex has adapted to this situation and is providing these tools, which add up to a significant industry.
Ainhoa reinforces how important the military industry’s role is in pushing for the adoption of these technologies, and notes that EU Member States are guilty of letting the industry participate closely in policy decisions. This contracting-out of security undermines accountability and makes the system opaque and removed from public scrutiny.
Patrick recently won a transparency lawsuit against a security project called iBorderCtrl, created to evaluate a border technology that required people entering the EU to answer questions on video. The twist is that the technology was supposed to leverage AI to detect lies. He explains how, in his view, this technology presents a grave problem from a human rights perspective: the machine can never be reliable enough, and would lead to unfair rejections at the border. The technology also runs the risk of being discriminatory, as face detection technology has previously proven to be less accurate for people of colour. If deployed, this technology would then be on the market and could be sold to authoritarian governments around the world. It has “enormous potential for abuse,” concludes our guest.
“All this research is happening in the dark, and they have recurrently been funding the development of crowd control and mass surveillance technologies. This is really a danger to our free and open society.”Patrick Breyer
Can technology and human rights be reconciled in any way? Is a more equitable balance possible, or is a whole new model needed? Ainhoa argues it is too soon to know, but that the clues point to an ever-increasing securitization of technology, with new killer drones that can take a life in complete disregard of human rights. Technology, and the companies that develop it, always seem to outpace democratic rulemaking, with the complicity of policymakers who let lobbies make the rules.
“We need to stop and try to think about the consequences of all this technology. Technology and companies run faster than society… It is creating more insecurity than the insecurity it claims to fight.”Ainhoa Ruiz
Patrick highlights how the new generation is mobilized, as we see with the climate protests, and could change public discourse. Finally, he explains that living in a securitized environment does not guarantee low crime – far from it. The examples of the US and the UK show that securitization and security do not go hand in hand. Human rights are perfectly compatible with targeted investigation and with security, but they are currently under threat. It should be our role, he concludes, to defend an open and free society.
Yasmin is a second-year undergraduate studying History. She is studying Early Modern Eurasia with an interest in the importance of liminality and “borders” in forming socio-political and cultural identity. Originally from Buckinghamshire, she has engaged with human rights issues since secondary school. After graduating, she aspires to work with international governance concerning peace, gender and security.
Ainhoa Ruiz has been a researcher at the Centre Delàs d’Estudis per la Pau since 2014, with an interest in border militarisation, arms trading and private military companies. She received her doctorate for a thesis on the militarisation and walling of the border space, and has worked in both Colombia and Palestine. Her report “A Walled World, towards a Global Apartheid” warns of the expansion of the border space into both European states and third countries, linking the 1000km of physical walls to virtual walls of surveillance and discourses of violence.
Patrick Breyer is a Member of the European Parliament from the German Piratenpartei. A self-described “digital freedom fighter,” he was elected to the European Parliament in 2019, is an active member of the NGO Working Group on Data Retention, and a member of the Committee on Civil Liberties, Justice and Home Affairs. Patrick recently sought an order from the European Court of Justice to publicly release documents concerning iBorderCtrl, an artificial intelligence technology for scanning and detecting the emotions of migrants crossing EU borders.
Fortress Europe: the millions spent on military-grade tech to deter refugees (The Guardian 2021)
Automated technologies and the future of Fortress Europe (Amnesty International 2019)
Fortress Europe: dispatches from a gated Continent (Matthew Carr 2016)
A Walled World: towards global apartheid (Ainhoa Ruiz, Mark Akkerman, Pere Brunet 2020)
For this week’s episode, host Maryam Tanwir and panelist Nanna Sæten speak about predictive policing with Johannes Heiler, Adviser on Anti-Terrorism Issues at the OSCE Office for Democratic Institutions and Human Rights (ODIHR), and Miri Zilka, Research Associate in the Machine Learning Group at the University of Cambridge. Predictive policing leverages techniques from statistics and machine learning to predict crime. The human rights perspective raises several interesting questions about the use of predictive policing: as the technology functions today, it seems to perpetuate already existing bias in police work, but could this be overcome? Using technology for police work raises the questions of who is responsible for the protection of human rights and how to decide whose human rights to uphold in cases of conflict. What is clear to both of our guests is that there need to be clear channels of oversight if human rights are to be protected in digitized law enforcement.
“All of these systems impact human rights.”Johannes Heiler
This episode starts with a definition of the issue at hand. When we speak of predictive policing, we are usually referring to models that predict the time and place where crime will happen, and more generally to all models that attempt to predict crime that has yet to happen. However, Johannes notes it is important to distinguish between predictive policing that aims to map crime hotspots and models that attempt to predict crime at the individual level.
“What we don’t know is exactly how they work; we don’t know what type of info they take in, we don’t know the algorithms, and most importantly we don’t know how they’re being used by the police.”Miri Zilka
Can machine learning help us overcome the existing heuristic biases in policing, or does it accentuate them? The issue with AI is that it tends to reify and reproduce the human biases that went into the data. Where the police search for crime, there is a risk of additional bias, as the police tend to look for crime in certain areas more than others (and victim reporting is not exempt from bias either). Certain neighbourhoods around the world are already over-policed, and this informs the tools that are used for predictive policing purposes in a feedback loop.
“There are real risks that the datasets that are used in the analysis are tainted by discriminatory policing from the start. The bias reproduces itself in the machine learning in a feedback loop. The whole system is built to perpetuate and reinforce discrimination.”Johannes Heiler
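The feedback loop both guests describe can be illustrated with a deliberately simplified simulation; every number here is invented. Two areas have the same true crime rate, patrols are allocated in proportion to past records, and new records follow the patrols, so an initial skew in the records is perpetuated rather than corrected.

```python
TRUE_RATE = 0.1        # identical underlying crime rate in both areas (invented)
PATROLS_PER_DAY = 100  # total patrols to allocate each day (invented)

def simulate(days=50, records=(60.0, 40.0)):
    """Toy feedback loop: patrols follow past records, records follow patrols."""
    records = list(records)
    for _ in range(days):
        total = sum(records)
        # patrols are allocated in proportion to each area's *recorded* crime
        patrols = [PATROLS_PER_DAY * r / total for r in records]
        # both areas yield detections at the same true rate per patrol,
        # yet the detections land wherever the patrols already are
        records = [r + p * TRUE_RATE for r, p in zip(records, patrols)]
    return records

final = simulate()
share = final[0] / sum(final)  # area 0's share of recorded crime stays at 0.6
```

Even though both areas are equally criminogenic by construction, area 0's 60% share of the records never decays: the data keep “confirming” the original allocation, which is exactly the loop the quote warns about.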
However, does this mean that predictive policing is in and of itself problematic, or simply that its current uses are? Miri argues that the technology itself isn’t the problem, but that its current uses may indeed be deemed problematic. There are “safe uses” of the technology that can help law enforcement assist people in distress.
“The public might accept the use of certain tools if they are shown that they reap significant benefits.”Miri Zilka
Technology, while often presented as more neutral than human-led processes, is not necessarily so. Both our guests agree that technology reflects the biases of the people designing technological artefacts, something which applies to predictive policing software.
Our guests are then asked why predictive policing focuses on petty crime rather than on white-collar crime. For both of our guests, some tools are already in place in this area, but their uses are less controversial and thus receive less public attention. Even then, there are issues: bank accounts closed without notice and without reason, for instance.
It seems to our panelist and both of our guests that in recent years we have been moving toward a more proactive type of policing rather than a reactive one. Under the pressure of terrorism, police departments across the world are increasingly trying to prevent crime from happening, rather than simply attempting to punish it. However, as Johannes explains, “Preventing crime is also a human rights obligation of the state.” This shift thus makes sense, but it also comes at a price. In terrorism cases we target crime that has not yet been committed, which raises a lot of issues. Can a crime be judged based on intent alone?
“Bias is inherently human, and if systems are built and we select the data that machines should use and that will be used for training them, then this influences the machine. Technology is presented as objective and unbiased, but that isn’t true, because it is also socially constructed.”Johannes Heiler
On all of these topics, our guests are unanimous on one point: more oversight from policymakers and the public is needed. Technology makes trade-off decisions explicit. As Miri explains, “whatever those tradeoffs and decisions are, they shouldn’t be left to technologists and algorithm designers who don’t have the context or authority to make these decisions”. We also need more public involvement: people should know what these tools do and validate the system. We need to be able to demonstrate whether the system is doing what we want it to do.
The question is: who decides, and what safeguards are there? To change things for the better, we should ask how we can help decision makers in decision-making processes, rather than replace them. Johannes points to the problem of the human use of these tools: border guards, for instance, don’t understand how their tools work, and they haven’t participated in their design. According to him, that is a problem: people should be aware of the system and its human rights implications. If not, “they will just follow the decisions made by the tech”.
“There is a need for independent oversight.”Johannes Heiler
Miri suggests that perhaps we should rethink our relationship with these technologies: they should be thought of as “binoculars” that help law enforcement see new things but do not remove the decision from officers.
On a more personal note, are our experts worried?
Johannes is worried about the experimental use of technology in general. This technology is being used in conjunction with other technologies (facial recognition, video analysis, automated license plate readers, etc.). The evidence on the accuracy of these systems is not very clear, and that is worrying, as these tools are “high-risk”.
“Very often things are implemented which are untested and where there are really serious concerns about their implications.”Johannes Heiler
Miri adds that technology does not necessarily mean things get better; sometimes it makes things worse. We should work much harder to make sure that the technology we implement is making things better. But to end on an optimistic note, she thinks that this is possible, though it requires cooperation between policymakers, the public, and law enforcement.
“Statistics and data and technology can improve outcomes but you have to carefully make sure that is what’s happening because they can also make them worse.”Miri Zilka
Nanna Lilletvedt Sæten is a first-year PhD student in political theory at the Department of Politics and International studies, University of Cambridge. Her research centres around the politics of technology and time. Before coming to Cambridge, Nanna did her MSc on Arendtian violence at the University of Copenhagen and she has previously worked for the Norwegian Embassy in Dublin with issues at the intersection of technology and policy.
Johannes Heiler, Adviser on Anti-Terrorism Issues, OSCE Office for Democratic Institutions and Human Rights (ODIHR) is a human rights professional from Germany who serves as Adviser on Anti-Terrorism Issues in the Human Rights Department of ODIHR. He has worked at ODIHR in different capacities since August 2013, including in the implementation of projects to strengthen the protection of human rights defenders. From 2003 to 2013 he worked at Amnesty International in London, where he was primarily engaged in the human rights law and policy area and conducted advocacy work on a broad range of issues with international and regional human rights mechanisms and institutions, including the United Nations and the Council of Europe.
Miri Zilka is a Research Associate in the Machine Learning Group at the University of Cambridge where she works on Trustworthy Machine Learning. Her research centers around the deployment of algorithmic tools in criminal justice. Before coming to Cambridge, she was a Research Fellow in Machine Learning at the University of Sussex, focusing on fairness, equality, and access. Miri obtained a PhD from the University of Warwick in 2018. She holds an M.Sc. in Physics and a dual B.Sc. in Physics and Biology from Tel Aviv University. Miri was awarded a Leverhulme Early Career Fellowship to develop a human-centric framework for evaluating and mitigating risk in causal models, set to start in May 2022. She is a College Research Associate at King’s College Cambridge and an Associate Fellow at Leverhulme Centre for the Future of Intelligence. Miri is currently on a part-time secondment to the Alan Turing Institute.
In this first episode of Season 6, we gather our panelists to discuss the topics that will be on our minds this season. From predictive policing to biometric data collected from refugees, we’re covering a global range of issues at the cutting edge of human rights advocacy, research and policy.
The Declarations Podcast is back for its sixth season! In this episode we provide an overview of the topics we will be discussing in each of the season’s episodes. Maryam Tanwir, this season’s host, discusses these themes with our panellists, who each present what is at stake.
“Predictive policing contributes to reproducing existing patterns and diverting the focus towards, for example, property crimes and overlooking, for example, white collar crimes.”
The first episode we discussed looks at predictive policing. Predicting crime is not new, in the sense that society and law enforcement have tried to prevent criminal activity for centuries. But today, predictive policing entails leveraging techniques from statistics and machine learning to predict future criminal activity. Data on past criminal activity are used to train algorithms to identify patterns, whether hot zones for crime or individuals of interest. The goal of predictive policing is to prevent crime and better allocate police resources to areas of interest, with the idea that technology may help make the policing process fairer and more neutral by bypassing the heuristic biases of individual police officers. There are, however, a number of human rights issues with predictive policing as it functions today. The data fed into the algorithm are not necessarily neutral, but reflect the past biases of recorded crime in any police registry. Predictive policing thereby reproduces existing patterns, diverting the focus towards, for example, property crimes while overlooking offences such as white collar crimes. This has led to overpolicing and the disproportionate targeting of vulnerable populations, which has serious human rights implications and has sparked massive protests: in early November 2021, the LAPD was forced to discontinue its use of the PredPol software following public outcry. In this episode of Declarations, we will be speaking to human rights experts and academics about the human rights implications of this emergent technology. What happens to the presumption of innocence in predictive policing? How can we secure the right not to be arbitrarily detained or targeted? How do we ensure equality before the law? And what does it mean to prevent a crime before it has even been committed?
“The questions of who controls this data, how secure it is and how hard it is for it to be hacked into by various actors are of utmost importance“
We then moved on to a preview of our episode looking at the collection of biometric data from refugees, delving into the case of the Rohingya in Myanmar. The starting point is that in June, Human Rights Watch released a report stating that the UNHCR improperly collected Rohingya data and shared it with the Myanmar government. This spurred a wide debate about the way in which Rohingya data have been collected, and more generally about how biometric data are collected from refugees. The UN defends these practices as a more dignified way of registering refugees, one that is more secure and efficient, that guards against potential fraud and double registration, and that appeases the concerns about national security that many donor countries have expressed. This is problematic from a human rights perspective. The questions of who controls this data, how secure it is and how vulnerable it is to hacking by various actors are of utmost importance, as is the question of consent and the power relations between aid agencies and refugees. How much can refugees really give informed consent if they don’t know where their data is going? This is happening in many different places around the world: in Afghanistan, Kenya, Somalia, Syria and Yemen, as well as Bangladesh.
“These games are caveated by the fact that you can just switch your phone off at any time and tap out of the danger, which is something that is not possible if you’re a refugee”
The next episode we discussed will examine video games that simulate a first-person perspective of life in refugee camps. Can these be an effective way of raising awareness about this experience and building empathy? Some of them use virtual reality to radically put the player in the shoes of a refugee. In the game “Bury Me, My Love”, for example, you are inserted straight into the phone of an anxious husband as he guides his wife, Nour, from Syria to Europe, modelled on the published texts of a woman who made the same journey. Other games use VR to give the player a real first-person perspective, or let you play as an avatar, making life-and-death decisions in the camps. While the idea of these games is to educate people about the migrant experience, the dangers faced and the emotions felt, we have to ask how effective they really are at changing perceptions. They could be a fantastic educational tool, but we must also ask whether they trivialize the refugee experience. These games are caveated by the fact that players can simply switch their phones off at any time and tap out of the danger, which is not possible for refugees. In this light, can they really simulate what it is like to feel the emotions of a refugee? Games are the largest form of media consumed at the moment and, like so many other technological solutions to human rights issues, they need to be taken seriously for their potential benefits. The answer is far more complicated than black or white.
“Since the turn of the century, migration has increasingly been cast as a security issue, rather than a human or social issue, with borders themselves becoming geopolitical zones of conflict.”
Following that, we moved to a preview of our episode on the securitization of the EU’s borders. Since the turn of the century, migration has increasingly been cast as a security issue rather than a human or social issue, with borders themselves becoming geopolitical zones of conflict. What some call ‘Fortress Europe’ is a product of decades of investment in the securitization and militarization of Europe’s borders, whose operations reinforce the construction of a ‘safe’ internal European space and an ‘unsafe’ external one, institutionalizing reactive suspicion towards migrants and asylum seekers rather than humanitarian responsibility. This episode will ask about the relationship between such techno-solutionism and the prevalent discourses of violence and threat that surround migration into Europe. Are they entwined? Does one cause the other? Or are they simply coincidental in a digitalising world? What help or hindrance can the machine’s perspective bring to such a deeply human issue? We will be looking at the legality and nuances of this technological development, including its potential challenge to Article Six of the European Convention on Human Rights, the right to a fair trial. An interesting case in this respect is currently before the European Court of Justice, concerning video lie detectors used on migrants crossing into Greece, Latvia and Hungary, which scan facial expressions as migrants respond to questions to determine whether they are ‘lying’. We anticipate a result within a few days of recording, something that will be interesting to return to. With the increasing automation of the border, more and more decisions – decisions on which someone’s life, health and security hinge – are being displaced from the human to the machine.
“The main question we will aim to unpack in our discussion is whether live facial recognition is the path to a surveillance state, or whether it could be reconciled with human rights standards.”
The next episode on our agenda focuses on live facial recognition, a widely debated topic in recent years, both in the UK and internationally. Several organizations advocate against the use of this technology based on Article Eight of the Human Rights Act, which aims to protect the right to private life. Academic research on the topic takes a different approach, looking at both the advantages and the disadvantages of this technology in various contexts and focusing more on public attitudes towards facial recognition. It aims to ask why citizens across countries have different views of how, or whether, this technology should be used. In short, the main question we will aim to unpack in our discussion is whether live facial recognition is the path to a surveillance state, or whether it could be reconciled with human rights standards. To explore this topic, we hope to bring a wide range of perspectives on the current use of live facial recognition by various institutions, both public and private. We will also ask which actors should have access to individuals’ biometric facial recognition data: should it be the government or the police, for security reasons? Could this be extended to private companies under any circumstances? We also seek to find out how much of a say the public should have in the use of this technology, and whether people are sufficiently informed about it at the moment. Finally, and perhaps most importantly, what should our aims be regarding live facial recognition in the future? Is there a way to deploy it in a human rights compliant manner, or should it be abolished completely?
“Some American estimates say AI could displace a quarter of all jobs.“
We then began to explore a frequently discussed and contested aspect of artificial intelligence: its relationship with employment, and how it is already causing, and could continue to cause, mass redundancies in many fields, which we will look at from a human rights perspective. Some American estimates say AI could displace a quarter of all jobs. While it will certainly create new jobs, its overall effect is still unclear: what is certain is that there will be a great shift in the job landscape. We will be considering whether human rights might be fundamental in the future as we reconcile the progress of AI with the protection of employment, careers and workers. This topic brings up many interesting issues, the answers to which are not at all clear. One key issue is whether there is a human right to work in the first place, and whether AI replacing jobs on a potentially very wide scale undermines or breaches this right. Do current international human rights instruments cater to this situation? If there is no such right, should there be? Even if we can say there is a relevant human right, what can governments across the world be expected to do to uphold it? How do they protect jobs? Can we halt the progress of AI to protect workers? In a way, there is a fundamental tension between technological advances and the benefits they can bring, on the one hand, and their impact on certain groups in society on the other.
“We are going to be exploring this topic not just through an academic point of view, but also through on-the-ground experience, thinking about how women can protect themselves and the often-exploitative nature of the industry.“
The conversation then moved on to our upcoming episode on deep fakes. Deep fakes are videos in which the face of the actor is swapped for another face. The person manufacturing the video is then able to control the subject’s facial expressions and actions, which often results in those affected appearing to perform acts to which they have not consented. Deep fakes have gained a lot of popularity in recent years: during the 2020 elections we saw fake videos of Donald Trump saying outrageous things, or Mark Zuckerberg making unsavoury comments. But what becomes extremely problematic is when we follow where the money goes, which isn’t to politics but to the adult entertainment industry, and particularly the porn industry. Research shows that 90 to 95% of deep fake videos online are non-consensual porn, and around 90% of those involve women, a horrifying statistic. In this episode, we are going to be exploring this topic not just from an academic point of view, but also through on-the-ground experience, thinking about how women can protect themselves and about the often-exploitative nature of the industry. This issue is especially important because, while the UK made revenge porn illegal in 2015, current legislation does not encompass new technologies such as deep fakes, leading the UK Law Commission to begin a review of the law.
The final episode we discussed will look at internet shutdowns in Pakistan. We will be speaking with Pakistani activists who are moving the needle, creating awareness about human rights and human rights violations.
The entire podcast team is looking forward to discussing these fascinating topics with our panellists and their guests. Stay tuned!