The Declarations Podcast is back for its sixth season! In this first episode, we provide an overview of the topics we will be discussing over the course of the season. Maryam Tanwir, this season’s host, discusses these themes with our panellists, who each present what is at stake.
“Predictive policing contributes to reproducing existing patterns and diverting the focus towards, for example, property crimes and overlooking, for example, white collar crimes.”
The first episode we discussed looks at predictive policing. Predictive policing, or predicting crime, is not new, in the sense that society and law enforcement have tried to prevent criminal activity for centuries. But today, predictive policing entails leveraging techniques from statistics and machine learning to predict future criminal activity. Data on past criminal activity is used to train algorithms to identify patterns, whether hot zones for crime or individuals of interest. The goal of predictive policing is thus to prevent crime and better allocate police resources to areas of interest, with the idea that technology may help make the policing process fairer and more neutral by bypassing the heuristic biases of individual police officers. There are a number of human rights issues with predictive policing as it functions today. The data fed into the algorithm is not necessarily neutral, but reflects the past bias of recorded crime in any police registry. As a result, predictive policing contributes to reproducing existing patterns, diverting the focus towards, for example, property crimes while overlooking offences such as white-collar crime. This has led to over-policing and the disproportionate targeting of vulnerable populations, which has serious human rights implications and has prompted massive protests. In early November 2021, for example, the LAPD was forced to discontinue its use of the PredPol software following public outcry. In this episode of Declarations, we will be speaking to human rights experts and academics about the human rights implications of this emerging technology. What happens to the presumption of innocence in predictive policing? How can we secure the right not to be arbitrarily detained or targeted? How do we ensure equality before the law? And what does it mean to prevent a crime before it has even been committed?
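To make the feedback loop concrete, here is a minimal, purely illustrative Python sketch of place-based hot-spot prediction. The grid cells and records below are invented for illustration, and real systems such as PredPol use far more sophisticated models, but the underlying dynamic is the same: a model trained on records of where police have already recorded crime will keep sending police back to the same places.

```python
# A toy hot-spot predictor: rank grid cells by past recorded incidents.
# All records below are invented for illustration; they reflect where
# police patrolled and recorded crime, not where all crime occurred.
from collections import Counter

past_records = [
    ("cell_A", "property"), ("cell_A", "drug"), ("cell_A", "property"),
    ("cell_B", "property"), ("cell_B", "drug"),
    ("cell_C", "fraud"),  # white-collar offences are rarely logged this way
]

def predict_hot_zones(records, top_n=2):
    """Return the cells with the most recorded incidents -- the naive
    core of place-based predictive policing."""
    counts = Counter(cell for cell, _offence in records)
    return [cell for cell, _count in counts.most_common(top_n)]

print(predict_hot_zones(past_records))  # ['cell_A', 'cell_B']

# The feedback loop: patrols sent to the predicted cells generate new
# records there, which entrench the same prediction on the next pass.
```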
“The questions of who controls this data, how secure it is and how hard it is for it to be hacked into by various actors are of utmost importance”
We then moved on to a preview of our episode looking at the collection of biometric data on refugees, delving into the case of Rohingya refugees from Myanmar. The starting point is that in June 2021, Human Rights Watch released a report stating that the UNHCR improperly collected Rohingya data and shared it with the Myanmar government. This spurred a wide debate about the way in which Rohingya data have been collected, and more generally about how biometric data are collected from refugees. The UN defends these practices as a more dignified way of registering refugees, one that is more secure and efficient, guarding against potential fraud and double registration, and one that addresses the national security concerns many donor countries have expressed. This is problematic from a human rights perspective. The questions of who controls this data, how securely it is stored and how vulnerable it is to hacking by various actors are of utmost importance, as are the questions of consent and of the power relations between aid agencies and refugees. Can refugees really give informed consent if they don’t know where their data is going? And this is happening in many places around the world: in Afghanistan, Kenya, Somalia, Syria and Yemen, as well as Bangladesh.
“These games are caveated by the fact that you can just switch your phone off at any time and tap out of the danger, which is something that is not possible if you’re a refugee”
The next episode we discussed will examine video games that simulate the refugee experience from a first-person perspective. Can these be an effective way of raising awareness of this experience and building empathy? Some radically put the player in the shoes of a refugee: in “Bury Me, My Love”, you are inserted straight into the phone of an anxious husband as he guides his wife, Nour, from Syria to Europe, a story modelled on the published texts of a woman who made the same journey. Others use VR to give the player a literal first-person perspective, or let you play as an avatar making life-and-death decisions in the camps. While the idea of these games is to educate people about the migrant experience, the dangers faced and the emotions felt, we have to ask how effective they really are at changing perceptions. They could be a fantastic educational tool, but we must also ask whether they trivialize the refugee experience. These games are caveated by the fact that players can just switch their phones off at any time and tap out of the danger, which is not possible for refugees. In this light, can they really simulate what it would be like to feel the emotions of a refugee? Games are now the most widely consumed form of media and, like so many of the other technological answers to human rights issues discussed this season, their potential benefits need to be seriously considered. The answer is far more complicated than black or white.
“Since the turn of the century, migration has increasingly been cast as a security issue, rather than a human or social issue, with borders themselves becoming geopolitical zones of conflict.”
Following that, we moved to a preview of our episode on the securitization of the EU’s borders. Since the turn of the century, migration has increasingly been cast as a security issue rather than a human or social issue, with borders themselves becoming geopolitical zones of conflict. What some call ‘Fortress Europe’ is the product of decades of investment in the securitization and militarization of Europe’s borders, whose operations reinforce the construction of a ‘safe’ internal European space and an ‘unsafe’ external space, institutionalizing reactive suspicion towards migrants and asylum seekers rather than humanitarian responsibility. This episode will ask about the relationship between such techno-solutionism and the prevalent discourses of violence and threats that surround migration into Europe. Are they entwined? Does one cause the other? Or are they simply coincidental in a digitalising world? What help or hindrance can the machine’s perspective bring to such a deeply human issue? We will be looking at the legality and nuances of this technological development, including its potential challenge to Article Six of the European Convention on Human Rights, the right to a fair trial. An interesting case in this respect is currently before the European Court of Justice, concerning video lie detectors used on migrants crossing into Greece, Latvia and Hungary, which scan their facial expressions as they respond to questions to determine whether they are ‘lying’. We anticipate a ruling within a few days of recording, which will be interesting to return to. With the increasing automation of the border, more and more decisions – decisions on which someone’s life, health and security hinge – are being displaced from the human to the machine.
“The main question we will aim to unpack in our discussion is whether live facial recognition is the path to a surveillance state, or whether it could be reconciled with human rights standards.”
The next episode on our agenda focuses on live facial recognition, a widely debated topic in recent years, both in the UK and internationally. Several organizations advocate against the use of this technology on the basis of Article Eight of the Human Rights Act, which protects the right to private life. Academic research on the topic takes a different approach, looking at both the advantages and the disadvantages of this technology in various contexts and focusing more on public attitudes towards facial recognition, asking why citizens across countries hold different views on how, or whether, this technology should be used. In short, the main question we will aim to unpack in our discussion is whether live facial recognition is the path to a surveillance state, or whether it could be reconciled with human rights standards. To explore this topic, we hope to bring a wide range of perspectives on the current use of live facial recognition by various institutions, both public and private. We will also ask which actors should have access to individuals’ facial biometric data: should it be the government or the police, for security reasons? Could this be extended to private companies under any circumstances? We also seek to find out how much of a say the public should have on the use of this technology, and whether or not they are sufficiently informed about it at the moment. Finally, and perhaps most importantly, what should our aims be regarding live facial recognition in the future? Is there a way to deploy it in a human rights compliant manner, or should it be abolished completely?
“Some American estimates say AI could displace a quarter of all jobs.”
We then began to explore a frequently discussed and contested aspect of artificial intelligence: its relationship with employment, and how it is already causing, and could continue to cause, mass redundancies in many fields, which we will examine from a human rights perspective. Some American estimates say AI could displace a quarter of all jobs. While it will certainly create new jobs, its overall effect is still unclear; what is certain is that there will be a great shift in the job landscape. We will be considering whether human rights might prove fundamental in reconciling the progress of AI with the protection of employment, careers and workers. This topic raises a number of interesting questions whose answers are not clear at all. One key issue is whether there is a human right to work in the first place, and whether AI replacing jobs on a potentially very wide scale undermines or breaches this right. Do current international human rights instruments cater to this situation? If there is no such right, should there be? Even if we can say there is a relevant human right, what can governments across the world be expected to do to uphold it? How do they protect jobs? Can we expect them to restrain the progress of AI in order to protect workers? At root, there is a fundamental tension between technological advances, with the benefits they can bring, and their impact on certain groups in society.
“We are going to be exploring this topic not just through an academic point of view, but also through on-the-ground experience, thinking about how women can protect themselves and the often-exploitative nature of the industry.”
The conversation then moved on to our upcoming episode on deep fakes. Deep fakes are videos in which one person’s face is swapped for another’s. The person manufacturing the video can then control the subject’s facial expressions and actions, which often results in those depicted appearing to do things to which they have never consented. Deep fakes have gained a lot of attention in recent years: during the 2020 elections we saw fake videos of Donald Trump saying outrageous things, or Mark Zuckerberg making unsavoury comments. But what becomes extremely problematic is when we follow the money, which leads not to politics but to the adult entertainment industry, and particularly to porn. Research shows that 90 to 95% of deep fake videos online are non-consensual porn, and around 90% of those involve women – a horrifying statistic. In this episode, we are going to explore this topic not just from an academic point of view, but also through on-the-ground experience, thinking about how women can protect themselves and about the often-exploitative nature of the industry. This issue is especially pressing because, although the UK made revenge porn illegal in 2015, current legislation does not encompass new technology such as deep fakes, leading the UK Law Commission to begin a review of the law.
The final episode we discussed will look at internet shutdowns in Pakistan. We will be speaking with Pakistani activists who are moving the needle, raising awareness of human rights and of human rights violations.
The entire podcast team is looking forward to discussing these fascinating topics with our panellists and their guests. Stay tuned!