Season 6 Episode 3 – Live Facial Recognition

The third episode of this season of the Declarations Podcast delves into the topic of live facial recognition. Host Maryam Tanwir and panelist Veronica-Nicolle Hera sat down with Daragh Murray and Pete Fussey, who co-authored the Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology in July 2019. Live facial recognition has been a widely debated topic in recent years, both in the UK and internationally. While several campaigning organisations advocate against the use of this technology based on the prohibition of discrimination set out in human rights law, independent academic research on the topic reveals important insights into trials of this technology. Our guests are at the forefront of this research, and present some of their findings in this episode.

We kick off the episode with definitional issues. What is facial recognition technology? Pete explains that, when we speak of “facial recognition”, we are in fact referring to several technologies: one-to-one systems (such as those used to unlock smartphones or to clear passport gates in airports), which do not need databases, and one-to-many systems like live facial recognition (LFR), which compare images of passers-by against databases.
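The distinction Pete draws can be sketched in code. The snippet below is an illustrative toy, not any vendor’s system: it assumes faces have already been converted into embedding vectors, compares them by cosine similarity, and uses made-up names (`verify`, `identify`) and an arbitrary 0.8 threshold.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two face embeddings, in [-1, 1]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, template, threshold=0.8):
    # One-to-one: "is this the claimed person?" -- compares against a
    # single stored template, no database needed (phone unlock, e-gate)
    return cosine_similarity(probe, template) >= threshold

def identify(probe, gallery, threshold=0.8):
    # One-to-many (LFR): compare one passer-by against an entire
    # watchlist database and return the best match, if any
    scores = {name: cosine_similarity(probe, emb)
              for name, emb in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The asymmetry is visible in the signatures: `verify` touches one template, while `identify` must scan every watchlist entry for every face that passes the camera, which is part of what makes one-to-many systems so much more intrusive at scale.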

“[Live Facial Recognition] is widely seen to be more intrusive than other forms of surveillance.”

Pete Fussey

Some of this live facial recognition is retrospective, as in the US, and some takes place constantly in real-time (as in the CCTV used in Chinese cities). These different types of facial recognition, both guests emphasize, have different implications. One-to-many live facial recognition systems are seen as more “intrusive” and “intimate”, their impact magnified by their ability to process images of several thousand people in a day.

[Live Facial Recognition] has the potential to be an extremely invasive tool and… to really change the balance of power between state and citizens.

Daragh Murray

Comparing LFR with past surveillance technologies reveals an exponential increase in state surveillance capacities. The East German “Stasi”, for instance – widely considered one of the most sophisticated surveillance apparatuses – had access to only a fraction of the information that can be collected today. That’s why consideration of the human rights impact of these technologies is essential.

As Pete notes, we have often begun to review new technology years after it is initially deployed, and tend to look at a single aspect rather than examining a technology more broadly. For instance, Pete highlights how we look at technologies with a focus on authorisation decisions, even though their uses are likely to change over time. Potential future uses, therefore, need to be factored into our analysis.

There needs to be proper thinking about the whole life-cycle of these technologies.

Pete Fussey

Veronica then asks our guests to discuss their recent research on the Metropolitan Police’s practices. The Met has been trialing LFR since 2016, when it was first deployed during the Notting Hill Carnival. Pete highlights how important it is to see technology in context: it is hard to anticipate the direction a certain technology will take, and it is only through use that we can see its nuances.

The report they co-authored blended sociology and human rights law. From a sociological perspective, their main finding is that the human adjudication that was considered essential to the technology’s compliance with domestic and international law was close to non-existent. As Pete told us, “There is adjudication, but it’s not meaningful”.

“There is adjudication, but it’s not meaningful.”

Pete Fussey

Regarding human rights, Daragh outlines the 3-part test used to evaluate a potential inconsistency with human rights law. From the human rights perspective, new technology ought to (1) comply with local law, (2) have a legitimate aim, and (3) be necessary in a democratic society. Human rights law is about protection from arbitrary intervention by the state.

From this perspective, the report’s main finding hinged on compliance with law. This is delicate, as there is no law directly regulating LFR; the only relevant law stems from common law, which stipulates that the police ought to “undertake activities necessary to protect the public.” How can this ensure that LFR is not deployed arbitrarily? The report concluded that the Metropolitan Police’s deployment of LFR was most likely unlawful, as it probably does not comply with the UK’s Human Rights Act. Indeed, a court in South Wales found that a similar deployment by the South Wales Police was unlawful.

We concluded that it was unlikely the Met’s deployment of LFR would be lawful.

Daragh Murray

As for the third important test – necessity in a democratic society – there are conflicting norms: protection vs. privacy. In short, you have to demonstrate the technology’s utility against its potential harm. In this circumstance, this would involve showing how LFR could be used to prevent crime.

There was also a lack of pre-deployment assessment. For instance, the widely-accepted fact that LFR technology has significant biases was never assessed. Pete highlights how the introduction of new technology is often smoothed through softened terminology: “it’s a trial,” for instance. The Met’s use of LFR, however, was a deployment rather than a trial.

So, how should LFR be used in the future, if it should be used at all? From a human rights approach, Daragh thinks what is most important is to consider every deployment on a case-by-case basis, and to recognize the differences between technologies. He notes the difference between using LFR at borders against a narrow database of known threats and deploying it at a protest. The latter is likely to have a chilling effect on the right to protest and a “corroding effect on democracy”. The most problematic deployment, of course, is the use of LFR across an entire CCTV network.

The risk is that we sleepwalk into a very different type of society.

Daragh Murray

Pete highlights how thinking about LFR technology from the perspective of data protection is too restrictive. Terrorism or child abuse are often invoked to justify deployment of this technology, but this does not fit with what our guests saw.

Both our guests argue that the biases built into the technology make its use fundamentally problematic, whatever the circumstances. As Pete says, it is a scientific fact that algorithms and LFR technology carry several biases: gender, age, race. Knowing that, how can we introduce such technology into public space?

Daragh also points to the impact these technologies have on the relationship between citizens and the police. Previously, the police might have used community-based policing to work with areas affected by crime and address problems as they arose. With mass surveillance, however, you monitor entire populations and perform checks based on reports provided by the surveillance.  

“We simply have no oversight or regulation of it. None.”

Pete Fussey

So, what is the public’s role in deciding whether LFR tech should be used? 

Pete argues that it is not so much about a majority of the public approving a technology, but rather its impact on the most vulnerable segments of the population. Indeed, it is challenging to form an opinion on these topics, given their highly technical nature. If a technology is sold as working perfectly, this influences the opinion we have of it. Daragh adds that prior community engagement is key: in Notting Hill, for instance, nothing was done to explain why the technology was being used.

“We overstate public opinion as a justification for the use of these technologies.”

Pete Fussey

Finally, we asked our guests whether LFR could be deployed in compliance with human rights. Daragh thinks that it could be, but only in very narrow cases – airports, for example. However, we do not yet have sufficient knowledge of the technology to give it a green light. Across a city or in specific public locations, he doubts it can ever be compliant with human rights. The chilling effect is key: how will this technology allow people to develop freely, perhaps outside the norms? Anything that has the potential to interfere with our freedom to build our own lives should not be implemented.

Nevertheless, Pete thinks that the technology will be used, regardless of how problematic it is, and recommends that implementation be at least temporarily paused so that as many safeguards as possible can be put in place before a further roll-out.

Our Panelist:

Veronica is an MPhil student reading Politics and International Studies at the University of Cambridge. Her research focuses on public perceptions of trust in government across democracies and authoritarian regimes. She is originally from Romania but completed her undergraduate degree at University College London. Her interest in human rights issues and technology stems from her work with the3million, the largest campaign organisation advocating for the rights of EU citizens in the UK.

Our guests:

Pete Fussey is professor of sociology at the University of Essex. Professor Fussey’s research focuses on surveillance, human rights and technology, digital sociology, algorithmic justice, intelligence oversight, technology and policing, and urban studies. He is a director of the Centre for Research into Information, Surveillance and Privacy (CRISP) – a collaboration between surveillance researchers at the universities of St Andrews, Edinburgh, Stirling and Essex – and research director for the ESRC Human Rights, Big Data and Technology project (www.hrbdt.ac.uk). As part of this project Professor Fussey leads research teams empirically analysing digital security strategies in the US, UK, Brazil, India and Germany.

Other work has focused on urban resilience and, separately, organised crime in the EU with particular reference to the trafficking of children for criminal exploitation (he authored Child Trafficking in the EU: Policing and Protecting Europe’s Most Vulnerable (Routledge) in 2017). Further books include Securing and Sustaining the Olympic City (Ashgate), Terrorism and the Olympics (Routledge), and a co-authored book on social science research methodology, Researching Crime: Approaches, Methods and Application (Palgrave). He has also co-authored one of the UK’s best-selling criminology textbooks (Criminology: A Sociological Introduction) with colleagues from the University of Essex. He is currently contracted by Oxford University Press to author a book entitled “Policing and Human Rights in the Age of AI” (due Spring 2022).

Daragh Murray is a Senior Lecturer at the Human Rights Centre & School of Law, who specialises in international human rights law and the law of armed conflict. He has a particular interest in the use of artificial intelligence and other advanced technologies, particularly in an intelligence agency and law enforcement context. He has been awarded a UKRI Future Leaders Fellowship to examine the impact of artificial intelligence on individual development and the functioning of democratic societies. This 4 year project began in January 2020 and has a particular emphasis on law enforcement, intelligence agency, and military AI applications. Previous research examined the relationship between human rights law and the law of armed conflict and the regulation and engagement of non-State armed groups.

Daragh is currently a member of the Human Rights Big Data & Technology Project, based at the University of Essex Human Rights Centre, and the Open Source for Rights Project, based at the University of Swansea. He also teaches on the Peace Support Operations course run by the International Institute of Humanitarian Law in Sanremo.

He is on the Fight For Humanity Advisory Council, a NGO focused on promoting human rights compliance among armed groups. Daragh has previously worked as head of the International Unit at the Palestinian Centre for Human Rights, based in the Gaza Strip. In 2011, he served as Rapporteur for an Independent Civil Society Fact-Finding Mission to Libya, which visited western Libya in November 2011 in the immediate aftermath of the revolution.

Further reading

Pete Fussey and Daragh Murray (2019) ‘Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology’

Pete Fussey and Daragh Murray (2020) ‘Policing Uses of Live Facial Recognition in the United Kingdom’ in Amba Kak, ed., Regulating Biometrics: Global Approaches and Urgent Questions (published by the AI Now Institute)

Davide Castelvecchi (2020) ‘Is facial recognition too biased to be let loose?’ (featured in Nature)

Season 6 Episode 2 – Fortress Europe

In this week’s episode, host Maryam Tanwir and panellist Yasmin Homer discuss the role of technology in the securitization of European borders with MEP Patrick Breyer and researcher Ainhoa Ruiz (see bios below). It was 71 years ago that the 1951 UN Refugee Convention codified the right of refugees to seek sanctuary and the obligation of states to protect them. It was in 2015 that Angela Merkel famously declared “wir schaffen das” – “we can do it.” Yet the International Organization for Migration has described 2021 as the deadliest year on migration routes to and within Europe since 2018. At least 1,315 people died making the central Mediterranean crossing, while at least 41 lives were lost at the land border between Turkey and Greece. The creation of Fortress Europe is inserting technology into the heart of the human story of migration, as migrants uproot themselves to escape war, famine, political violence and economic instability in search of a better and safer life, undertaking an increasingly treacherous and unforgiving journey. What is the role of technology in the ongoing securitization of the EU’s borders? What are the implications for human rights?

Fortress Europe starts ironically with a free movement agreement, which entails enforcing a harder exterior border. In some manner we are telling everyone that us inside this agreement are civilization and outside are barbarians.

Ainhoa Ruiz

The conversation starts off with a summary of the current situation at European borders. Ainhoa notes that, ironically, the concept of Fortress Europe can be traced back to the Schengen Agreement, which enforced a division between those inside the zone, who were granted free movement, and those outside, whose movement toward Europe was impeded. She also highlights how the notion of a European border creates a division between “civilized” Europeans and “barbaric” outsiders, and points to Article 12 of the International Covenant on Civil and Political Rights, which guarantees the right to leave one’s country.

“We are not actually sure about what they are doing with this data. So, these technological walls affect migrants but are also affecting us inside the free movement area.”

Ainhoa Ruiz

So, what do Europe’s walls look like? Today, there are three physical walls on the EU’s borders. As Ainhoa notes, however, walls are not only physical, but also “mental” and “administrative.” Technology constitutes another wall, not only for people trying to come into the EU but also for the people living inside it, whose movement is being watched. The last “wall” is constituted by Frontex, which acts as a European border police force, forcing migrants to take ever more dangerous routes.

The European Union is pouring enormous amounts of resources and money into building this fortress

Patrick Breyer

Patrick highlights that we have just observed Holocaust Remembrance Day, which ought to be an opportunity for us to collectively reflect on the fact that we could all be refugees fleeing terror. For him, the starting point was the Syrian Civil War, when refugees were first perceived as a threat. This played into the hands of authoritarian parties, who were able to impose the theme in public debates, in turn leading democratic parties to follow suit. Both of our guests highlight a climate of fear: fear of globalization, fear of crime – and more. These concerns are compounded in matters of migration.

They are starting to collect information about our plane travels, but they also want to expand it to train and ferry travels. They are using algorithms that evaluate the risk that we pose based on patterns, that allegedly indicate a risk if we have certain criteria in common with perpetrators in the past.

Ainhoa Ruiz

This fortress is buttressed by the collection of personal data, leading us toward a security society that has more traits in common with China than what we like to admit. Our “mere existence” – as our guest puts it – is surveilled under the pretence of preserving Europe’s security. The military complex has adapted to this situation and is providing these tools, which add up to a significant industry. 

Ainhoa reinforces how important the military industry’s role is in pushing for the adoption of these technologies, and notes that EU Member States are guilty of letting the industry participate closely in policy decisions. This contracting-out of security erodes accountability and makes the system opaque and removed from public scrutiny.

Patrick recently won a transparency lawsuit against a security project called iBorderCtrl, created to evaluate a border technology that required people entering the EU to answer questions on video. The twist is that the technology was supposed to leverage AI to detect lies. He explains how, in his view, this technology presents a grave problem from a human rights perspective, as the machine can never be reliable enough and would lead to unfair rejections at the border. It also runs the risk of being discriminatory: face-detection technology has previously proven to be less accurate for people of colour. If deployed, this technology would be on the market and could be sold to authoritarian governments around the world. It has “enormous potential for abuse,” concludes our guest.

“All this research is happening in the dark, and they have recurrently been funding the development of crowd control and mass surveillance technologies. This is really a danger to our free and open society.”

Patrick Breyer

Can technology and human rights be reconciled in any way? Is there a more equitable balance to be struck, or is a whole new model needed entirely? Ainhoa argues it is too soon to know, but that the clues point to an ever-increasing securitization of technology, with new killer drones that can take a life in complete disregard of human rights. Technology, and the companies that develop it, always seem to outpace democratic rulemaking, with the complicity of policymakers who let lobbies make the rules.

We need to stop and try to think about the consequences of all this technology. Technology and companies run faster than society… It is creating more insecurity than the insecurity it claims to fight.

Ainhoa Ruiz

Patrick highlights how the new generation is mobilized, as we see with the climate protests, and could change public discourse. Finally, he explains that living in a securitized environment does not guarantee low crime – far from it. The examples of the US and the UK show that securitization and security do not go hand in hand. Human rights are perfectly compatible with targeted investigation and with security, but they are currently under threat. It should be our role, he concludes, to defend an open and free society.

Our Panelist:

Yasmin is a second-year undergraduate studying History. She is studying Early Modern Eurasia with an interest in the importance of liminality and “borders” in forming socio-political and cultural identity. Originally from Buckinghamshire, she has engaged with human rights issues since secondary school. After graduating, she aspires to work with international governance concerning peace, gender and security.

Our guests:

Ainhoa Ruiz has been a researcher at the Centre Delàs d’Estudis per la Pau since 2014, with an interest in border militarisation, arms trading and private military companies. She received her doctorate for a thesis on the militarisation and walling of the border space, and has worked in both Colombia and Palestine. Her report “A Walled World, towards a Global Apartheid” warns of the expansion of the border space into both European states and third countries, linking the 1000km of physical walls to virtual walls of surveillance and discourses of violence.

Patrick Breyer is a Member of the European Parliament from the German Piratenpartei. A self-described “digital freedom fighter,” he was elected to the European Parliament in 2019, is an active member of the NGO Working Group on Data Retention, and a member of the Committee on Civil Liberties, Justice and Home Affairs. Patrick recently sought an order from the European Court of Justice to publicly release documents concerning iBorderCtrl, an artificial intelligence technology for scanning and detecting the emotions of migrants crossing EU borders.

Further reading

Fortress Europe: the millions spent on military-grade tech to deter refugees (The Guardian 2021)

Automated technologies and the future of Fortress Europe (Amnesty International 2019)

Fortress Europe: dispatches from a gated Continent (Matthew Carr 2016)

A Walled World: towards global apartheid (Ainhoa Ruiz, Mark Akkerman, Pere Brunet 2020)

Season 6 Episode 1 – Predictive Policing

For this week’s episode, host Maryam Tanwir and panelist Nanna Sæten speak about predictive policing with Johannes Heiler, Adviser on Anti-Terrorism Issues at the OSCE Office for Democratic Institutions and Human Rights (ODIHR), and Miri Zilka, Research Associate in the Machine Learning Group at the University of Cambridge. Predictive policing leverages techniques from statistics and machine learning to predict crime. The human rights perspective raises several interesting questions about the use of predictive policing: as the technology functions today, it seems to perpetuate existing bias in police work, but could this be overcome? Using technology for police work raises the questions of who is responsible for the protection of human rights and how to decide whose human rights to uphold in case of conflict. What is clear to both of our guests is that there need to be clear channels of oversight if human rights are to be protected in digitized law enforcement.

“All of these systems impact human rights.”

Johannes Heiler

This episode starts with a definition of the issue at hand. When we speak of predictive policing, we are usually referring to models that predict the time and place where crime will happen, and more generally to all models that attempt to predict crime that has yet to happen. However, Johannes notes it is important to distinguish predictive policing that aims to map crime hotspots from models that attempt to predict crime at the individual level.

“What we don’t know is exactly how they work: we don’t know what type of info they take in, we don’t know the algorithms and, most importantly, we don’t know how they’re being used by the police.”

Miri Zilka

Can machine learning help us overcome the existing heuristic biases in policing, or does it accentuate them? The issue with AI is that it tends to reify and reproduce the human biases that went into the data. Where the police search for crime, there is a risk of additional bias, as the police tend to look for crime in certain areas more than others (and victim reporting is not exempt from bias either). Certain neighbourhoods around the world are already over-policed, and this informs the tools used for predictive policing, creating a feedback loop.
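The feedback loop described here can be made concrete with a toy simulation (all numbers, area names and function names below are hypothetical, chosen only to illustrate the mechanism): two areas have identical underlying crime rates, but one starts out with more recorded crime; patrols always go where the records point, and crime is only recorded where police are looking, so the initial skew can never be corrected.

```python
import random

def simulate(steps=50, seed=0):
    rng = random.Random(seed)
    true_rate = {"A": 0.3, "B": 0.3}   # identical real crime rates
    recorded = {"A": 10, "B": 5}       # area A starts over-recorded
    for _ in range(steps):
        # the "predictive" rule: patrol where recorded crime is highest
        patrolled = max(recorded, key=recorded.get)
        # crime is only observed and recorded in the patrolled area
        if rng.random() < true_rate[patrolled]:
            recorded[patrolled] += 1
    return recorded
```

Because area B is never patrolled, its record never grows, and every new data point confirms the model’s belief that A is the problem area: exactly the self-reinforcing loop the guests warn about.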

“There are real risks that the datasets that are used in the analysis are tainted by discriminatory policing from the start. The bias reproduces itself in the machine learning in a feedback loop. The whole system is built to perpetuate and reinforce discrimination.”

Johannes Heiler

However, does this mean that predictive policing is in and of itself problematic, or simply that its current uses are? Miri argues that the technology itself isn’t the problem, but that its current uses may indeed be problematic. There are “safe uses” of such applications that can help law enforcement assist people in distress.

“The public might accept the use of certain tools if they are shown that they reap significant benefits.”

Miri Zilka

Technology, while often presented as more neutral than human-led processes, is not necessarily so. Both our guests agree that technology reflects the biases of the people designing technological artefacts, something which applies to predictive policing software.

Our guests are then asked why predictive policing focuses on petty crime rather than on white-collar crime. For both of our guests, some tools are already in place, but their uses are less controversial and thus receive less public attention. And even then, there are issues: bank accounts closed without notice and without reason, for instance.

It seems to our panelist and both of our guests that in recent years we have been moving toward a more proactive type of policing rather than a reactive one. Under the pressure of terrorism, police departments across the world are increasingly trying to prevent crime from happening, rather than simply attempting to punish it. However, as Johannes explains, “Preventing crime is also a human rights obligation of the state.” This shift thus makes sense, but it also comes at a price. In terrorism cases we target crime that has not yet been committed, which raises a lot of issues. Can a crime be judged on intent alone?

Bias is inherently human, and if systems are built and we select the data that will be used for training them, then this influences the machine. Technology is presented as objective and unbiased, but that isn’t true, because it is also socially constructed.

Johannes Heiler

On all of these topics, our guests are unanimous on one point: more oversight from policymakers and the public is needed. Technology makes trade-off decisions explicit. As Miri explains, “whatever those tradeoffs and decisions are, they shouldn’t be left to technologists and algorithm designers who don’t have the context or authority to make these decisions”. We also need more public involvement: people should know what these tools do and validate the system. We need to be able to demonstrate whether the system is doing what we want it to do.

The question is: who decides, and what safeguards are there? To change things for the better, we should ask how we can help decision makers in their decision-making processes, rather than replace them. Johannes points to the problem of human use of the tool: border guards, for instance, don’t understand how their tools work, as they haven’t participated in their design. According to him that is a problem: people should be aware of the system and its human rights implications. If not, “they will just follow the decisions made by the tech”.

There is a need for independent oversight.

Johannes Heiler

Miri suggests that perhaps we should rethink our relationship with these technologies: they should be thought of as “binoculars” that help law enforcement see new things but do not remove the decision from officers.

On a more personal note, are our experts worried?

Johannes is worried about the experimental use of technology in general. This technology is being used in conjunction with other technologies (facial recognition, video analysis, automated license plate readers, etc.). The evidence on the accuracy of these systems is not very clear, and that is worrying, as these tools are “high-risk”.

Very often things are implemented which are untested and where there are really serious concerns about their implications.

Johannes Heiler

Miri adds that technology does not necessarily make things better, and sometimes makes them worse: we should work much harder to make sure that the technology we implement is improving things. But to end on an optimistic note, she thinks that this is possible, though it requires cooperation between policymakers, the public and law enforcement.

Statistics and data and technology can improve outcomes but you have to carefully make sure that is what’s happening because they can also make them worse.

Miri Zilka

Our Panelist:

Nanna Lilletvedt Sæten is a first-year PhD student in political theory at the Department of Politics and International Studies, University of Cambridge. Her research centres around the politics of technology and time. Before coming to Cambridge, Nanna did her MSc on Arendtian violence at the University of Copenhagen, and she has previously worked for the Norwegian Embassy in Dublin on issues at the intersection of technology and policy.

Our guests:

Johannes Heiler, Adviser on Anti-Terrorism Issues, OSCE Office for Democratic Institutions and Human Rights (ODIHR) is a human rights professional from Germany who serves as Adviser on Anti-Terrorism Issues in the Human Rights Department of ODIHR. He has worked at ODIHR in different capacities since August 2013, including in the implementation of projects to strengthen the protection of human rights defenders. From 2003 to 2013 he worked at Amnesty International in London, where he was primarily engaged in the human rights law and policy area and conducted advocacy work on a broad range of issues with international and regional human rights mechanisms and institutions, including the United Nations and the Council of Europe.

Miri Zilka is a Research Associate in the Machine Learning Group at the University of Cambridge where she works on Trustworthy Machine Learning. Her research centers around the deployment of algorithmic tools in criminal justice. Before coming to Cambridge, she was a Research Fellow in Machine Learning at the University of Sussex, focusing on fairness, equality, and access. Miri obtained a PhD from the University of Warwick in 2018. She holds an M.Sc. in Physics and a dual B.Sc. in Physics and Biology from Tel Aviv University. Miri was awarded a Leverhulme Early Career Fellowship to develop a human-centric framework for evaluating and mitigating risk in causal models, set to start in May 2022. She is a College Research Associate at King’s College Cambridge and an Associate Fellow at Leverhulme Centre for the Future of Intelligence. Miri is currently on a part-time secondment to the Alan Turing Institute.

Further reading

O’Neil, Cathy. Weapons of Math Destruction

Benjamin, Ruha. Race after Technology.

Noble, Safiya. Algorithms of Oppression.

Season 6 Launch episode – Keeping Up: Human Rights in an Age of New Technologies

The Declarations Podcast is back for its sixth season! In this episode we provide an overview of the topics we will be discussing in each of the season’s episodes. Maryam Tanwir, this season’s host, discusses these themes with our panellists, who each present what is at stake.

“Predictive policing contributes to reproducing existing patterns and diverting the focus towards, for example, property crimes and overlooking, for example, white collar crimes.”

The first episode we discussed looks at predictive policing. Predictive policing, or predicting crime, is not new, in the sense that society and law enforcement have tried to prevent criminal activities for centuries. But today, predictive policing entails leveraging techniques from statistics and machine learning to predict future criminal activity. Data on past criminal activity are used to train algorithms to identify patterns: either hot zones for crime or individuals of interest. The goal of predictive policing is to prevent crime and better allocate police resources to areas of interest, with the idea that technology may help make the policing process fairer and more neutral, bypassing the heuristic bias of the individual police officer. There are a number of human rights issues with predictive policing as it functions today. The data fed into the algorithm are not necessarily neutral, but reflect the past bias of recorded crime in any police registry. Predictive policing thereby contributes to reproducing existing patterns, diverting the focus towards, for example, property crimes and overlooking offences such as white-collar crimes. This has led to over-policing and the disproportionate targeting of vulnerable populations, which has serious human rights implications and has led to massive protests. An example is that in early November 2021, the LAPD was forced to discontinue its use of the PredPol software following public outcry. In this episode of Declarations, we will be speaking to human rights experts and academics on the human rights implications of this emergent technology. What happens to the presumption of innocence in predictive policing? How can we secure the right not to be arbitrarily detained or targeted? How do we ensure equality before the law? What does it mean to prevent a crime before it has even been committed?

“The questions of who controls this data, how secure it is and how hard it is for it to be hacked into by various actors are of utmost importance”

We then moved on to a preview of our episode looking at the collection of biometric data on refugees, delving into the case of the Rohingya in Myanmar. The starting point is that in June, Human Rights Watch released a report stating that the UNHCR improperly collected Rohingya data and shared it with the Myanmar government. This spurred a wide debate about the way in which Rohingya data have been collected, and more generally about how biometric data are collected from refugees. The UN defends these practices as a more dignified way of registering refugees, one that is more secure and efficient, guarding against potential fraud and double registration and appeasing the concerns about national security that many donor countries have expressed. This is problematic from a human rights perspective. The questions of who controls this data, how secure it is and how vulnerable it is to hacking by various actors are of utmost importance, as are the questions of consent and of the power relations between aid agencies and refugees. Can refugees really give informed consent if they don’t know where their data is going? This is happening in many places around the world: in Afghanistan, Kenya, Somalia, Syria and Yemen, as well as Bangladesh.

“These games are caveated by the fact that you can just switch your phone off at any time and tap out of the danger, which is something that is not possible if you’re a refugee”

The next episode we discussed will examine video games that simulate a first-person perspective of life in refugee camps. Can these be an effective way of raising awareness about this experience and building empathy? Some of them use virtual reality to radically put the player in the shoes of a refugee. In one game, “Bury Me, My Love”, you are inserted straight into the phone of an anxious husband as he guides his wife from Syria to Europe, modelled on the published texts of a woman making the same journey. Other games use VR to give the player a true first-person perspective, or let you play as an avatar making life-and-death decisions in the camps. While the idea of these games is to educate people about the migrant experience, the dangers faced and the emotions felt, we have to ask how effective they really are at changing perceptions. They could be a fantastic educational tool, but we must also ask whether they trivialize the refugee experience. These games are caveated by the fact that players can simply switch their phones off at any time and tap out of the danger, something that is not possible for refugees. In this light, can they really simulate what it is like to feel the emotions of a refugee? Games are the largest form of media consumed at the moment and, like so many other technological solutions to human rights issues, they need to be taken seriously for their potential benefits. The answer is far more complicated than black or white.

“Since the turn of the century, migration has increasingly been cast as a security issue, rather than a human or social issue, with borders themselves becoming geopolitical zones of conflict.”

Following that, we moved to a preview of our episode on the securitization of the EU’s borders. Since the turn of the century, migration has increasingly been cast as a security issue, rather than a human or social issue, with borders themselves becoming geopolitical zones of conflict. What some call ‘Fortress Europe’ is the product of decades of investment in the securitization and militarization of Europe’s borders, whose operations reinforce the construction of a ‘safe’ internal space of Europe and an ‘unsafe’ external space, institutionalizing reactive suspicion of migrants and asylum seekers rather than humanitarian responsibility. This episode will ask about the relationship between such techno-solutionism and the prevalent discourses of violence and threat that surround migration into Europe. Are they entwined? Does one cause the other? Or are they simply coincidental in a digitalising world? What help or hindrance can the machine’s perspective bring to such a deeply human issue? We will be looking at the legality and nuances of this technological development, including its potential challenge to Article Six of the European Convention on Human Rights, the right to a fair trial. An interesting case in this respect is currently before the European Court of Justice, concerning video lie detectors used on migrants crossing into Greece, Latvia and Hungary, which scan migrants’ facial expressions as they respond to questions to judge whether they are ‘lying’. We anticipate a ruling within a few days of recording, something it will be interesting to return to. With the increasing automation of the border, more and more decisions – decisions on which someone’s life, health and security hinge – are being displaced from the human to the machine.

“The main question we will aim to unpack in our discussion is whether live facial recognition is the path to a surveillance state, or whether it could be reconciled with human rights standards.”

The next episode on our agenda focuses on live facial recognition, a widely debated topic in recent years, both in the UK and internationally. Several organizations advocate against the use of this technology based on Article Eight of the Human Rights Act, which aims to protect the right to private life. Academic research on the topic takes a different approach, looking at both the advantages and the disadvantages of this technology in various contexts and focusing more on public attitudes towards facial recognition. It asks why citizens across countries have different views of how, or whether, this technology should be used. In short, the main question we will aim to unpack in our discussion is whether live facial recognition is the path to a surveillance state, or whether it could be reconciled with human rights standards. To explore this topic, we hope to bring a wide range of perspectives on the current use of live facial recognition by various institutions, both public and private. We will also ask which actors should have access to individuals’ facial biometric data – should it be the government or the police, for security reasons? Could this be extended to private companies under any circumstances? We also seek to find out how much of a say the public should have on the use of this technology and whether they are sufficiently informed about it at the moment. Finally, and perhaps most importantly, what should our aims be regarding live facial recognition in the future? Is there a way to deploy it in a human rights compliant manner, or should it be abolished completely?

Some American estimates say AI could displace a quarter of all jobs.

We then turned to a frequently discussed and contested aspect of artificial intelligence: its relationship with employment, and how it already is causing, and could continue to cause, mass redundancies in many fields, which we will examine from a human rights perspective. Some American estimates suggest AI could displace a quarter of all jobs. While it will certainly create new jobs, its overall effect is still unclear: what is certain is that there will be a great shift in the job landscape. We will be considering whether human rights might be fundamental in the future as we reconcile the progress of AI with the protection of employment, careers and workers. This topic raises many interesting issues, the answers to which are far from clear. One key question is whether there is a human right to work in the first place, and whether AI replacing jobs on a potentially very wide scale undermines or breaches that right. Do current international human rights instruments cater to this situation? If there is no such right, should there be? Even if we can say there is a relevant human right, what can governments across the world be expected to do to uphold it? How do they protect jobs? Can we expect the progress of AI itself to protect workers? There is a fundamental tension between technological advances, with the benefits they can bring, and their impact on certain groups in society.

We are going to be exploring this topic not just through an academic point of view, but also through on-the-ground experience, thinking about how women can protect themselves and the often-exploitative nature of the industry.

The conversation then moved on to our upcoming episode on deep fakes: videos in which the face of the actor is swapped for another face. The person manufacturing the video is then able to control the subject’s facial expressions and actions, which often results in those affected appearing to perform actions to which they have not consented. Deep fakes have gained a lot of popularity in recent years: during the 2020 elections we saw fake videos of Donald Trump saying outrageous things, and of Mark Zuckerberg making unsavoury comments. But what becomes extremely problematic is when we follow the money, which goes not to politics but to the adult entertainment industry, and particularly to pornography. Research shows that 90 to 95% of deep fake videos online are non-consensual porn, and 90% of those involve women – a horrifying statistic. In this episode, we are going to explore this topic not just from an academic point of view, but also through on-the-ground experience, thinking about how women can protect themselves and about the often-exploitative nature of the industry. The issue is especially pressing because, although the UK made revenge porn illegal in 2015, current legislation does not encompass new technologies such as deep fakes, leading the UK Law Commission to begin a review of the law.

The final episode we discussed will look at internet shutdowns in Pakistan. We will be speaking with Pakistani activists who are moving the needle, creating awareness about human rights and human rights violations.

The entire podcast team is looking forward to discussing these fascinating topics with our panellists and their guests. Stay tuned! 


Kathleen Schwind: Water Security and How to ‘Ignite Your Story’

In our final episode of the season we are delighted to be joined by Kathleen Schwind. A 2015 Coca-Cola Scholar, Kathleen focuses her research on issues of water security in the Middle East and North Africa. She has studied at MIT and the University of Cambridge, and joins our host, Muna Gasim, to discuss the problem of water shortage and its interaction with politics and international relations, as well as giving advice on how to find your passion and make a positive change at any level. An insightful and inspiring conversation, this episode offers a microcosm of what Declarations has sought to achieve over the course of this season: shedding light on pressing problems in our world today and, through our guests, offering guidance on how to solve them.

Growing up in rural California, Kathleen quickly became aware of the problem of water scarcity and the extent to which it could divide communities. She remembers her high school days, when farmers, residents and senior local officials would argue and debate access to water. It is this that captured her attention and laid the foundations of her recent and ongoing research into the role of water in the Israeli-Palestinian conflict. The Joint Water Committee, formed as part of the 1995 Oslo Accords, was intended to be a temporary measure but quickly became one of permanent significance; the reliance on political cooperation for continuous and safe water supplies in the region ensures water cannot be forgotten when analysing the ongoing conflict. How the committee should be restructured and operate formed the bulk of Kathleen’s research while she was at MIT, but, as she and her childhood experiences remind us, issues of water are not confined to areas of ongoing conflict, impacting the everyday life of people across the globe and from all walks of life.

‘Water is a very political issue whether you like it or not’ 

Kathleen Schwind

In the midst of the ongoing Covid-19 pandemic, water scarcity has only grown in significance. Across much of the world the message has been to wash your hands regularly and thoroughly, raising the question: ‘what about those who do not have access to fresh water?’. It is in this current climate that Kathleen has seen an increase in the number of small organisations, local communities and entrepreneurs seeking to take the initiative and bring change about themselves. Bridging divides, such as those between Israelis and Palestinians, these people have partnered with their neighbours to try and make a positive impact. Not only demonstrating the pressing nature of water shortages, these projects and ambitions also exemplify the benefits of finding your passion and seeking to act upon it. 

It is at this point in the episode that Muna turns to discuss Kathleen’s scholarship. Growing up in a rural community where there were few opportunities for young people not blessed with athletic talent, Kathleen decided she wanted to change this. Launching the Gifted And Talented Educational Olympics (GATE Olympics) when she was in 4th grade created an opportunity for children to show off their problem-solving and intellectual talents, offering a chance for both competition and recognition to young people who had not previously been celebrated to that degree. Kathleen was later named a Coca-Cola Scholar, reflecting the positive impact she had had on her community.

The initiative and ambition Kathleen showed in creating the GATE Olympics is the focus of her new book ‘Ignite Your Story’. Recounting the lives of other Coca-Cola Scholars she has encountered, their passions and actions are shown to have improved the world around them. This not only heralds their achievements, but also offers the reader examples of how to make positive change. Details of the book and where to purchase it can be found below. 

Links to further information:
www.igniteyourstory.com 
https://www.igniteyourstory.com/our-story  


Season 5 Episode 13 – Foro Penal & Macro/Micro-Resistance in Venezuela, featuring Alfredo Romero

For this week’s episode, host Muna Gasim and panelist Eddie Kembery speak to Alfredo Romero, one of the founding members of Foro Penal, a human rights organization that won the 2017 Robert F. Kennedy Human Rights Award for its work in Venezuela. Beginning with Alfredo’s own story, this episode is a masterclass in grassroots activism as we explore what has driven Foro Penal’s growth from four lawyers’ pro-bono work to an organisation of over 7,000 activists. Along the way, we discuss the difference between macro and micro resistance, activism without sacrifice, and Alfredo’s unconventional use of music.

Alfredo begins his story with the death of Jesus Mohammed on April 11th 2002, during the protests against Hugo Chávez which left 300 people injured. Alfredo’s pro-bono effort to assist the young boy’s family was one of the first actions he took against repression. He says he never thought of himself as a human rights activist – he had studied banking law – but as he kept helping more and more families, and recruiting and educating more volunteers to assist him, Foro Penal steadily grew.

“One woman, three years in jail without a sentence, her trial never ends… she was pregnant, and tortured… no one knows what happened to the baby”

Alfredo Romero

He then takes us through the range of actions Foro Penal volunteers are encouraged to take, formalised in his Legal Litigations Manual. The main emphasis is on direct local action: going to the courts, and drawing the attention of opinion makers, trade unions or local communities in order to precipitate a release. As he points out, the judicial system is only one of multiple systems they leverage to get a victim released. Next, he will often try to encourage international support – he suggests Foro Penal is the leading Venezuelan NGO in terms of leveraging international attention. Underpinning this are “communicative actions”: posts on social media, press conferences and traditional media, once more organised by a colossal network of activists. Finally, Foro Penal will occasionally stage non-violent protests as a way of increasing the political cost of repression. Later, we return to the topic, and Alfredo summarises the effect of having a clear formula with a drawing that captures how it streamlines decision making and avoids the need for extended experimentation:

“Concerts in the streets of Caracas, we play on the streets, music … And we start talking about situations”

Alfredo Romero

Alfredo gives one example of staging non-violent social events: in Caracas, they stage street concerts, where people gather and speak about community issues as well as human rights. Alfredo will often compose songs that specifically address relevant issues. This reflects his own personality, as both a certified lawyer at the international court and a musician who plays the guitar and sings.

“Before being a musician I’m a human being, but before being a lawyer I’m a musician”

Alfredo Romero

We then talk about the viability of Alfredo’s strategy for decreasing large-scale repression. Foro Penal has obviously secured many releases, but why are people still being put in jail? Alfredo calls it the “revolving door effect” – for each person that comes out, another goes in. For Alfredo, taking a stand against this micro-repression is enough, because little achievements stack up, and often those released or affected by the activism become supporters of Foro Penal’s efforts, in time becoming a macro-problem for the government. What will happen in the next five years? Alfredo isn’t sure – he says that he has always been expecting liberation, it’s a necessary part of the job – but he is hopeful that Foro Penal’s network will continue to grow and give hope to the unlawfully detained.

“We haven’t stopped the macro-repression – as I mentioned, repression has increased – but we have made progress on the micro”

Alfredo Romero

We talk about the universal applicability of the Foro Penal model. Alfredo has written about the models of repression (The Repression Clock) and Foro Penal operates with a clearly defined formal system. Could this work everywhere? Alfredo thinks so. He thinks all regimes go through the same stages – appeasement, awakening, hopeful and darkening – and outlines what those mean to him in more detail. For him, Venezuela is in an “appeasement” phase – and is about to wake up.

“They don’t care about what ideology they have, they care about controlling power”

Alfredo Romero

Finally, we return to Alfredo’s personal journey. Alfredo speaks of “the embrace of freedom” – liberation is an amazing feeling, but it is also an amazing feeling to liberate someone else. “There are many people around the world who are looking for this satisfaction”, so asking them is a gift, rather than a burden. That is what he means by activism without sacrifice.

“Whoever wants to become a billionaire, do not become a human rights activist. But there is something more valuable about being a lawyer, which is the satisfaction of helping someone.”

Alfredo Romero

Political Context

In April 2002, Chávez was briefly ousted from power in the 2002 Venezuelan coup d’état attempt following actions by parts of the military and media and demonstrations by the minority opposition, but he was returned to power after two days as a result of demonstrations by the majority of the public and actions by most of the military. However, political unrest continued during his term, including a national strike that lasted more than two months, from December 2002 to February 2003. He was elected for another term in December 2006 and in 2009 called for a referendum to remove term limits for all elected officials. Re-elected in 2012, he died in office in early 2013. He was succeeded by Nicolás Maduro (initially as interim president, before narrowly winning the 2013 presidential elections). A combination of policy failures and the collapse of oil prices caused a recession in 2014, and economic conditions continued to deteriorate in 2016. Maduro’s push to ban potential opposition presidential candidate Henrique Capriles from politics in 2017 further escalated protests.

On 20 May 2018, President Nicolás Maduro won the presidential election amidst allegations of massive irregularities by his main rivals. His inauguration resulted in widespread condemnation, prompting the National Assembly to invoke a state of emergency and some nations to withdraw their embassies from Venezuela. On 23 January 2019, the president of the National Assembly, Juan Guaidó, was declared interim president by that body, and recognized as the legitimate president by several nations, including the United States and the Lima Group. About 60 countries recognised him as acting president, but support for Guaidó has declined since a failed military uprising attempt in April 2019.

Today’s Guest

Alfredo Romero is the executive director of Foro Penal, a Venezuelan human rights organization composed of more than 100 well-known lawyers and a group of over 5,000 human rights activists who provide legal assistance to victims of arbitrary detentions in Venezuela, as well as assisting the families and victims of oppression.

Alfredo graduated as an attorney in Caracas before obtaining a master’s in Latin American Studies from Georgetown and another in Law from LSE. He went on to work as a professional lawyer before beginning his humanitarian efforts in 2002. Since then, Foro Penal has helped over 10,000 people, and Alfredo received the Orden Bicentenaria del Colegio de Abogados in 2014, the highest recognition given by this entity in Venezuela, as well as the Robert F. Kennedy Human Rights Award in 2017.

Foro Penal’s website can be accessed here.

And Alfredo’s book, The Repression Clock, published by the Wilson Centre, can be accessed for free online here.

Panelist’s Comment

In a country that has been failed by multiple decades of political leadership, Alfredo’s seemingly modest focus on emotional resonance, story-telling and community cohesion (over, say, political signalling or insistent street protests) is deceptively powerful, and something traditional journalism might fail to capture because it isn’t as fast-moving or flashy as rioting or grand pronouncements. At the same time, Alfredo was unusually aware of the government’s reasons for repression. Although he generalises about tyranny, the Venezuelan government aren’t monsters – they are acting rationally and effectively – and his balancing of emotional story with appropriate utilitarianism (ultimately “to increase the political cost of repression”) shows that Foro Penal can act with the head as well as the heart.

– Eddie Kembery


Season 5 Episode 12 – Reporting on Human Rights in Yemen with Afrah Nasser

This week, host Muna Gasim and panellist Akshata Kapoor welcome journalist Afrah Nasser for an in-depth discussion of human rights reporting, bias, gender inequity, and more in Yemen and the international community at large. Our discussion this week covers topics ranging from the role of objectivity in human rights reporting to both the benefits and pitfalls of technology and social media. Nasser shares insights with Muna and Akshata on finding role models and the most important ways that governments and residents alike can support Yemeni rights.

In 2011, there were civilian uprisings in Yemen alongside other Middle Eastern countries during the Arab Spring. In September 2014, the Houthi rebel group, in alliance with former President Saleh, ousted President Hadi and started a full-fledged war. In 2015, Saudi Arabia and the UAE with a coalition of Arab countries started a military campaign to reinstate President Hadi. Governments of Western countries continue to supply arms to the Saudi coalition that has been conducting relentless airstrikes in Yemen, affecting large swaths of civilian infrastructure and the population. Six years later, there seems to be no end in sight to the war in Yemen. 

According to the Yemen Data Project, since March 2015 there have been 18,569 civilian casualties and 22,701 air strikes. Thousands died in 2017 due to an outbreak of cholera and a breakdown of the healthcare system, which has yet to recover. A starving population is denied access to aid due to restrictions imposed by the Houthis. Women, political dissidents, and journalists are victims of arbitrary punishments. How does one report on such a conflict where so many different parties are complicit in the violation of human rights? What standards do you hold different parties to, and to what extent is it even possible to hold parties accountable? 

From humble beginnings in Yemen to an early career in journalism and the role of a blogger in Yemen’s 2011 uprising, former Yemeni journalist, political writer, and human rights defender Afrah Nasser has been advocating for women’s empowerment and human rights in Yemen for over a decade. Nasser has written for and made appearances on numerous news outlets, including Al-Jazeera, The Monitor, Atlantic Council, Carnegie Endowment for International Peace, and others. She is the recipient of the Swedish Peace and Arbitration Society Organization’s 2017 Eldh-Ekblads Peace Prize, the Pennskaft Prize in 2016, the Swedish Publicists Club’s 2014 Dawit Issak Prize, and the Committee to Protect Journalists’ International Press Freedom Award in 2017. In 2013, Nasser was named by BBC as one of the “100 Women Who Changed the World,” and has been featured three times as one of the 100 most influential Arabs by Arabian Business Magazine. Her blog, created during Yemen’s 2011 uprisings, has won her the recognition of CNN and Al-Monitor as one of the most influential blogs in the Middle East for her coverage of human rights. Today, she works as the Yemen researcher at Human Rights Watch, investigating humanitarian law violations and human rights abuses in Yemen.

“I think the question is, … what is your bias? Are your biases towards civilians? Towards human rights? Towards the integrity … the need for people to live in dignity, and, you know, for justice to be served? That’s my bias.”

Afrah Nasser

Our conversation begins with a discussion of the role of objectivity in journalism. Nasser shares that an emphasis on objectivity should not eclipse the humanity of the people in Yemen. Even those who believe themselves to be perfectly impartial, as academics often strive to be, are still likely to carry an implicit set of beliefs and biases which can skew data and information. To account for this, Nasser emphasizes the need for diversity of background and perspective – academics, researchers, human rights activists, witnesses, and other key stakeholders should come together at the same table.

“It’s really about having all these perspectives included. Because excluding local voices really harms what you’re trying to do.”

Afrah Nasser

Nasser also shares her experiences as a female journalist working in a male dominated field. She observes that even when female voices are represented, they are all too often disregarded or dismissed. Years of this disregard can culminate in imposter syndrome, or the belief that one does not deserve the position they have accomplished – when a woman is shown over and over that her opinions are not valued, this lowered esteem can become internalized. To help bolster confidence in women who are pursuing journalism – or any career – Nasser encourages finding and researching role models who have helped pave the way for the next generation to follow.

“It’s thanks to my mother actually, who taught me that your gender should mean nothing. It’s really about you, and your personality, and your hard work that determines what you want to be in the society.”

Afrah Nasser

With regard to the role of technology in sharing information, Nasser notes the clear benefits of heightened communication and access to information. The #MeToo movement in particular, she says, showed the power that women can wield when coming together to occupy new spaces and support one another. However, she is careful to raise the point that men and women encounter the online sphere in very different ways: while both receive negative commentary from adversaries, Nasser reflects on the trolling, sexual harassment and hate speech – what she calls “hate poetry” – that is directed disproportionately at women online. Governments and regulating bodies have a responsibility to end digital violence and make online presence safe for all.

“Very often I live with that trauma, that my opinions don’t matter. And every time I was getting the awards I was like, really? Are they sure? Is my work this important? But I always knew I was so passionate about writing. Like I could physically get sick if I don’t write, if I don’t express the things that I was seeing, or just doing proper journalism.”

Afrah Nasser

Likewise, the rise of citizen journalism has helped grassroots movements and human rights defenders make great strides in understanding and fighting against the abuses taking place worldwide. Simultaneously, oppressive governments are able to weaponize digital platforms to target dissidents and protestors, and further restrict free expression. In many countries, journalists and activists feel as though it is just a matter of time before it is “their turn” to be arrested for speaking out in criticism of the oppressive state. Part of the responsibility for correcting this falls on the shoulders of Western states and diplomats, who have the ability to pressure governments to respect the rights of their people.

“Diplomats should use their freedom of expression to support the oppressed.”

Afrah Nasser

Nasser concludes by encouraging all listeners and supporters to show solidarity by uplifting the voices and experiences of Yemenis.

“As a principle, if you really want to show solidarity for any Yemeni, just amplify their voices. It’s not about you, it’s not about hijacking their struggle, just amplify Yemeni voices.”

Afrah Nasser

Learn more:

Read Afrah Nasser’s bio on Human Rights Watch

Follow Afrah Nasser on Twitter

Human Rights Watch Articles about the Yemeni Crisis:

International Federation of Journalists: Yemen: Journalists continue facing harsh conditions


Season 5 Episode 11 – Counterterrorism & Human Rights in Conversation with Tom Parker

This week, host Muna Gasim welcomes guest Tom Parker, counterterrorism practitioner and former UN war crimes investigator, for a discussion of situating the fight against terrorism within a human rights framework. They discuss the power of language, the use of force, PEACE method interrogation, Guantanamo Bay, the state of policing, and more. To read Tom’s latest book, “Avoiding the Terrorist Trap: Why Respecting Human Rights is the Key to Defeating Terrorism”, click HERE to claim a 55% discount on the Hardback and a 30% discount on the eBook – be sure to use offer code P995PARKERHC for the Hardback and P995PARKEREB for the eBook!

Tom Parker is the author of “Avoiding the Terrorist Trap: Why Respecting Human Rights is the Key to Defeating Terrorism”(2019). Until recently he was Chief of Party of a European Union project providing assistance to the Office of the National Security Adviser in Baghdad, Iraq. Tom has previously served as an adviser on human rights and counter-terrorism to United Nations Counter-Terrorism Implementation Task Force (CTITF), as the Policy Director for Terrorism, Counterterrorism and Human Rights for Amnesty International USA, as a war crimes investigator for the United Nations International Criminal Tribunal for the former Yugoslavia (ICTY) working in the field in Bosnia and Kosovo, and as an Intelligence Officer in the British Security Service (MI5). As an independent consultant he has worked on transitional justice and security sector reform projects on four continents, and was one of the principal authors of the UN’s Preventing Violent Extremism Plan of Action.

After beginning his career as a self-described hard-charging counterterrorism officer, Tom’s focus shifted to research as he sought to better understand the complex role of Western powers in both counterterrorism and the protection of human rights. Muna and Tom discuss the political potency of language, particularly when it comes to defining labels such as “prisoner of war,” which not only carry legal ramifications but also afford legitimacy to the states in question.

“In fact, if we really wanted to start digging into solutions to terrorism, we really have to turn the lens back on ourselves and understand the role that we were playing in this dynamic.”

Tom Parker

Tom and Muna also discuss the responsibility incumbent upon global powers such as the US and the UK to uphold a high standard when it comes to the use of force. While Parker acknowledges that it is often unrealistic to expect a government not to act in the face of an imminent threat to its citizens, he underscores that the use of force should always be calibrated to the lowest necessary level. By conducting military operations which resulted in civilian casualties, the United States has begun to erode the “soft power” it long held as a global leader, promoter of liberal values, and human rights defender. Expanding on this, Parker explains how, particularly in law enforcement and interrogation, practicing the “PEACE method,” which protects the human rights of detainees, is not only the ethical choice but the smart one. Humane interrogation practices have been shown to be more effective at eliciting information than torture – which, Tom notes, is not only illegal but a universal crime, punishable worldwide, without statute of limitations.

“If you’re employing the right people, they should have the creative tools and the experience and the knowledge to find legal ways to achieve their objective. It really isn’t actually that difficult. And you should be challenged as a representative of the state to hold yourself to a higher standard, and you should be challenged to do your best work every day. So I don’t find this a particularly remarkable standard to impose on people working in counterterrorism.”

Tom Parker

Looking ahead to the future of human rights, Parker cautions that without significant attention to human rights protections, all of the components of a dystopian fantasy could come together into a reality. The ubiquity of facial recognition technology and surveillance hold tremendous and concerning potential for future human rights abuses – and this future may not be as far off as we would think.

“It’s not hard to imagine a dystopia where everything you say is recorded, everywhere you go is recorded, everybody you meet is recorded and your space to be a private, free individual disappears. Now that’s, as I said, that’s something of a dystopian fantasy of the moment, but the tools to make that dystopian fantasy a reality do exist and they’re getting more and more powerful every year.”

Tom Parker

In parting, Parker urges listeners to hold tight to the essential value of human rights protections. Human rights, he says, are not just idealistic – they are profoundly central human values, which must be defended persistently. For nations, the practice and protection of these values is a challenge that must be met in every possible scenario, without compromise.

“Infrastructure is pretty easy to rebuild. It’s actually really, really hard to recover your values once they start getting tarnished. Because hypocrisy is kryptonite to legitimacy.”

Tom Parker

LEARN MORE

“Avoiding the Terrorist Trap: Why Respecting Human Rights is the Key to Defeating Terrorism,” Tom Parker. Click HERE to claim a 55% discount on the Hardback and a 30% discount on the eBook – be sure to use offer code P995PARKERHC for the Hardback and P995PARKEREB for the eBook!

“Fighting an Antaean Enemy: How Democratic States Unintentionally Sustain the Terrorist Movements They Oppose,” Tom Parker.

“Acting Ethically in the Shadows: Intelligence Gathering and Human Rights,” Richard Barrett and Tom Parker.

“The Four Horsemen of Terrorism: It’s Not Waves, It’s Strains,” Tom Parker and Nick Sitter.


Season 5 Episode 10 – Thai Protests & the Fate of the Future Forward Party

This week, host Muna Gasim and panellist Neema Jayasinghe speak with Chamnan Chanruang from the Future Forward party about the anti-monarchy protests ongoing in Thailand. Chanruang is also a former Political Science and Law lecturer at Chiang Mai University, and has a professional background as a human rights activist. He has taken a stand against coups d’état and was also a key driver in the movement to finalise the draft act for Chiang Mai self-governance. He was previously appointed as the Chairperson of the Amnesty International Thailand Board.

I can say we have no freedom of speech, no freedom of assembly, especially related to monarchy or related to the institutions.

Chamnan Chanruang

In 2020, anti-government protests erupted in Thailand after courts banned the Future Forward Party, the country’s most vocal party opposing the government of former junta leader Prayut Chan-ocha. Due to the coronavirus, protests saw a brief pause, but the movement resumed in mid-July. Protestors were pushing for Prayut’s removal, a new constitution, and an end to the harassment of activists. Some protesters went further with a list of ten demands to reform the monarchy – demands that were cheered by tens of thousands of people at a demonstration in September. Currently, nearly one year after the emergency decree, more than 380 protesters (including 13 children) face criminal charges and alleged protest leaders remain in detention. 61 people face charges for defamatory comments about the monarchy, and more large-scale protests are expected, alongside the possibility of a charter rewrite with two referendums.

Many people committed suicide, they have no money, no food. This never happened before.

Chamnan Chanruang

Chanruang explains that power in Thailand is influenced by three main forces: businesses, politicians, and the monarchy, which wields military support. Due to rampant economic inequality, the Future Forward Party found vast support amongst the younger generations living in Thailand. The current protests differ from those in the past because of their specific focus on the Thai monarchical power structure – for example, it had long been customary for audience members in Thai cinemas to stand for the royal anthem before each show, but protestors have remained seated in protest.

In [the] long run they cannot, they cannot destroy… the demonstrations of the young generations.

Chamnan Chanruang

At the core of the unrest, Chanruang shares, is widespread economic inequality. Facing a lack of business opportunity in the face of monopolies, saddled with student debt, and without employment or income, Thailand’s younger generations are seeking reform. But the risk of persecution for dissent is high, and the criminal justice system remains intertwined with the interests of the ruling monarchy. Even from abroad, Chanruang says, the international community has an important role to play in putting pressure on the Thai government to respect and uphold human rights. This episode also features discussions of the interplay of regional politics, coronavirus vaccine equity, and the road ahead for the FFP.

People will win, but it takes time.

Chamnan Chanruang

Learn More

Chamnan Chanruang: Future Forward Party Biography

Read: Thailand protests: Why are Thai people protesting and what is the significance?


Season 5 Episode 9 – Existential Risk, Climate Crisis & Indigenous Rights with Natalie Jones

For this week’s episode, host Muna Gasim and panellist Eddie Kemberry are joined by Natalie Jones, Research Associate at the Centre for the Study of Existential Risk, to discuss existential risk, the climate crisis, indigenous rights, and the ways that all three intersect. Natalie shares insights into the nature of global, existential risks and how we can think ahead to protect the rights of future generations. We also discuss the need for substantial and meaningful representation of indigenous peoples in decision- and policy-making.

Natalie Jones works on how global injustice and inequality can potentially contribute to existential risk, with a particular interest in climate change. Her PhD work focused on accountability and procedural justice in global governance. Her background is in international law and climate policy, including as a Staff Writer for the Earth Negotiations Bulletin at the International Institute for Sustainable Development, a Research Assistant at the Lauterpacht Centre for International Law, and a judges’ clerk at the High Court of New Zealand. She holds an LLM in international law from the University of Cambridge, and an LLB(Hons) and BSc in physics from the University of Canterbury.

The Centre for the Study of Existential Risk is an interdisciplinary organization bringing together researchers from law, anthropology, engineering, maths, political science and more to understand existential risks (threats that are global in nature), how they are caused, and how to mitigate them. Existential risks include biological threats such as pandemics, risks stemming from developments in technology and AI, and perhaps most notably, the climate crisis. Each of these threats bears immense implications for human rights.

“Who is being heard in these conversations, and how? And under what conditions? And does participation translate into influence or power over outcomes? And if not, how can it do so?”

Natalie Jones

Eddie, Muna, and Natalie discuss the individual and cultural bias in favor of the present over the future, and the difficulty of protecting the health and rights of the generations which will follow ours. In the face of current, widespread threats to human rights, there is a risk of postponing climate mitigation discussions, since the effects of climate change can seem far off. Natalie stresses that responding to current human rights abuses and responding to the climate disaster are not mutually exclusive – in fact, many programs, like the Green New Deal, understand the deep connectivity between capitalism, human rights abuses, and environmental exploitation, and seek to remediate social and economic inequality hand in hand with offering climate solutions. Abuse of the environment is inextricable from the exploitative economic systems which favor short-term capital gain over long-term communal investment. The climate crisis does not have a singular solution, nor do the complex economic, political, and cultural conditions which have given rise to it.

“If you’re hearing the voices of the communities that are going to be affected by these policies, and if you know how they’re going to affect these communities, then … it’s a lot easier to actually get it right and to make climate policy that works for both communities that are really at the front lines and … reducing emissions and promoting climate action overall.”

Natalie Jones

When governments and international organizations are making critical decisions about climate mitigation and response, it is essential that the right people are not only being heard, but also having their perspectives honored and translated into actual action. Decision-making bodies are all too often at risk of perpetuating undue harm to vulnerable communities in the name of environmentally minded policies – as Natalie explains, these harms can be prevented by committing to meaningful participation and collaboration with stakeholders. Slowly, the global community is waking up to the urgent need to protect and expand indigenous rights. The UN Declaration on the Rights of Indigenous Peoples promoted this agenda, and recent years have seen an increase in protests, activism, and public outcry, including the Keystone XL Pipeline protests. While advances have been made by some governments and international organizations to include indigenous people at decision-making tables, this progress has been conspicuously lacking in areas like investment and trade, areas which would confer autonomy and lasting control.

“A sort of example here is the policies which have been called the Green New Deal type of policies, which are really aimed at combating inequality at the same time as combating the climate crisis. It’s about conceptualizing these two things as sort of both as crises and tackling them both … There’s a lot of literature out there that indicates that it can be done. That it’s not you know, human rights or the environment. It’s not like, prosperity or the environment. It’s really both, at the same time.”

Natalie Jones

The study of existential risk is a burgeoning field with abundant resources for listeners who are interested in getting more involved (see below for more details). Listeners who live on indigenous lands in countries that have colonial relationships with indigenous peoples are encouraged to start locally – learn the history of the land on which you now live. Listeners who live in the UK or elsewhere in Europe are encouraged to learn more about global history, particularly the global history of colonization that might not have been taught in schools. All listeners are encouraged to pay attention to ongoing movements and protests in defense of indigenous rights, learn about the issues and what is at stake, and determine how best to support these movements.

Learn More:

Natalie Jones is on Twitter at @nataliejon_es

Centre for the Study of Existential Risk, University of Cambridge.

Future of Humanity Institute, University of Oxford.

Future of Life Institute, Boston, USA.

The Precipice, by Toby Ord.

The Economics of Biodiversity: The Dasgupta Report, 2021.

Some resources to learn more about indigenous rights and indigenous peoples’ role in combating existential risk:

An Indigenous Peoples’ History of the United States, by Roxanne Dunbar-Ortiz.

Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge and the Teachings of Plants, by Robin Wall Kimmerer.

How to Survive an Apocalypse and Keep Dreaming, by Julian Brave NoiseCat.

As you might hear, our guest today is a person who stammers. Stammering affects up to 3% of adults in the UK. To learn more:

Stammering resources: Stamma.org

Words Fail Us by Jonty Claypole