
Season 6 Episode 8 – Deepfakes and Non-Consensual Pornography

In 2019, the deepfake detection company Sensity published a report that identified 96% of deepfakes on the internet as pornographic, with 90% of these depicting women. Deepfakes are a modern form of synthetic media created by two ‘competing’ AIs, with the goal of producing hyper-realistic videos, images, and voices. Over the past five years, this has led to major concerns about the technology being used to spread mis- and disinformation, carry out cybercrimes, tamper with human rights evidence, and create non-consensual pornography. In this episode, the last of this season of the Declarations podcast, host Maryam Tanwir and panellist Neema Jayasinghe sat down with Henry Ajder. Henry is not only responsible for the groundbreaking Sensity report, but is also a seasoned expert on the topic of deepfakes and synthetic media. He is currently the head of policy and partnerships at Metaphysic.AI.
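The “two ‘competing’ AIs” Henry describes are a reference to generative adversarial networks (GANs), one common way of building synthetic media: a generator learns to produce fakes while a discriminator learns to spot them, and each improves by trying to beat the other. The toy sketch below, written with PyTorch on a two-dimensional Gaussian rather than on images, is only an illustration of that adversarial loop – the network sizes, learning rates, and step count are arbitrary choices for the example, and this is not a deepfake or face-swap model.

```python
# Illustrative sketch only: a minimal generative adversarial network (GAN)
# trained on a toy 2-D Gaussian, to show the "two competing AIs" idea
# (generator vs. discriminator) behind deepfake synthesis. Real face-swap
# pipelines use far larger networks, face alignment, and image data.
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT_DIM = 8

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, 2),                     # outputs a fake 2-D "sample"
)
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 1),                     # logit: real vs. fake
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # "Real" data: points clustered around (2, 2), standing in for genuine images.
    return torch.randn(n, 2) * 0.3 + 2.0

for step in range(2000):
    # Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, LATENT_DIM)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, LATENT_DIM))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should cluster near the "real" mean (2, 2).
print(generator(torch.randn(1000, LATENT_DIM)).mean(0).detach())
```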

Neema and Henry start with the question of definition. ‘Deepfakes,’ Henry tells us, can be defined as “AI-generated synthetic media, such as highly realistic synthetic videos, texts, etc.” There are legitimate uses of synthetic media technology, but the term ‘deepfake’ generally refers to malicious uses, such as those made for pornography. The phenomenon emerged in 2017 on a Reddit forum of the same name, dedicated exclusively to swapping famous women’s faces into pornographic films. Back then, this was technically challenging, requiring considerable skill and processing power. Today, the tools are far more accessible and even gamified: models come pre-trained, and only a few images are needed.

As it becomes more accessible, people are no longer focusing as much on celebrities and are moving more toward private individuals they know in daily life, and this has led to scaling in terms of victims.

Henry Ajder

Neema then asks what kind of action can be taken to regulate deepfakes. Henry thinks the difficulty comes from the definition. If you are talking about synthetic images in general, regulation is an unrealistic prospect, as so many aspects of our lives use such images: cinema, Snapchat filters, and more. So, according to Henry, we should focus on malicious uses. The problem here is identifying culprits, and hoping they are in a jurisdiction where deepfakes are criminalized.

“This is truly a global issue, and countries around the world are trying to take action, but there is a question as to whether we are giving people false hopes.”

Henry Ajder

Another problem is that, with technological progress, these operations are likely to require less and less data in the future. ‘Nudifying’ technology, for instance, is increasingly accessible and will only become more widespread. Henry is particularly worried about students, who are generally tech-savvy and know how to use these tools. He is worried that young people – in particular, young women – are vulnerable.

Neema asks whether it would be good to bring these topics up in school, for instance in the context of sexual education. Henry thinks that schools are one of the places where deepfakes are most problematic, even if they are sometimes seen as “just fun” or “just fantasy.” As such, education on the harm they cause could be useful. It is key to teach the younger generation that these technologies are profoundly harmful and cannot be construed as fun, even if their use is not yet criminal. Henry is also deeply concerned about the way children are involved in these deepfakes, both as victims and perpetrators.

“Making it clear that this is a form of digital sexual violence is key.”

Henry Ajder

Could legitimate deepfake pornography be created – for instance, if a sex worker wanted to license their face? While this is an interesting question, Henry worries that the risks of misuse will always be very high, potentially outweighing any legitimate use. Only through a mechanism such as biometric authentication, with informed consent from all parties, could such a system be safe from misuse.

Another issue is that it is virtually impossible to check whether your image has been used against your will. When writing the report, Henry traced some of the videos back to their origins; after warning those involved of the malicious use that had been made of their faces, he realized that most of them did not know their images were being used. Unless deepfakes are used as weapons against them, victims generally don’t know they have been deepfaked. There is also a legal question over whether creating these fakes without sharing them should be criminalized (Henry believes so).

“Can you build these systems in a way that avoids misuse? I typically think it would be difficult to do so.”

Henry Ajder

Although the bulk of deepfakes target women, there are also cases of men, in particular homosexual men, being targeted, especially in countries where homosexuality is banned or stigmatized. In such cases, deepfakes can literally be a question of life and death for the men whose images are used. Being pragmatic, Henry thinks one of our best bets is to push this technology to the dark corners of the internet, and to make it clear that people who engage with it are engaging in criminal activity.

“There was no doubt that the vast majority of these people had no idea they had been targeted.”

Henry Ajder

Our panelist:

Neema considers herself to be incredibly privileged to have been able to work with those worst affected by society and governance over the years, which has fuelled her passion for Human Rights, an area in which she hopes to make a difference at both a policy and grassroots level. Neema has often found herself working in community development projects in Africa, especially Uganda and Tanzania, both in consultancy projects and NGO work. This inspired her to become the current President of the Afrinspire Cambridge Student Society and the fundraising officer for the Cambridge Hub. Years of community service led Neema to later establish her own education-based NGO in Sri Lanka. She is incredibly passionate about international development, the politics behind it and policy. It’s this that encouraged Neema to study Education, Policy and International Development at Cambridge.

Our guest:

Henry Ajder is a seasoned expert on the topic of deepfakes and synthetic media. He is currently the head of policy and partnerships at Metaphysic.AI and co-authored the report ‘Deeptrace: The State of Deepfakes’ while at Sensity – the first major report to map the landscape of deepfakes, which found that the overwhelming majority are used in pornography. He is a graduate of the University of Cambridge and an experienced speaker, frequently presenting keynotes, panels, and private briefings. He is also an established media contributor, regularly featuring on the BBC, The New York Times, Vox, The Guardian, Wired, and The Financial Times.

Further reading

Ajder, H., Patrini, G., Cavalli, F., & Cullen, L. (2019). Deeptrace: The State of Deepfakes – Landscape, Threats, and Impact (Sensity’s 2019 report)

Beres, D. (2018) Pornhub continued to host “deepfake” porn with millions of views, despite promise to ban (Mashable)

Cole, S. (2017) AI-Assisted Fake Porn Is Here and We’re All Fucked (Vice)

Gregory, S. (2021) ‘Deepfakes, misinformation and disinformation and authenticity infrastructure responses: Impacts on frontline witnessing, distant witnessing, and civic journalism.’ Journalism.

Harris, D. (2019). Deepfakes: False Pornography Is Here and the Law Cannot Protect You. Duke Law & Technology Review.

Mirsky, Y., & Lee, W. (2021). The Creation and Detection of Deepfakes. ACM Computing Surveys.

Yadlin-Segal, A., & Oppenheim, Y. (2021). Whose dystopia is it anyway? Deepfakes and social media regulation. Convergence.


Season 6 Episode 7 – AI and Workers’ Rights

In this episode, host Maryam Tanwir and panelist Archit Sharma discuss the impact of technology on employment with our guests, Martin Kwan and Dee Masters. Artificial Intelligence brings many promises, but to many it is a threat as well. As AI can increasingly perform tasks at low cost, what happens to those whose jobs are displaced by robots? And if we are using AI in the workplace to monitor employees and make recruitment decisions, how can we ensure workers’ rights are respected and that AI decisions are subject to sufficient oversight and accountability? This area is a complicated web of issues, but our guests have the expertise to help us better understand the stakes. Dee is a leading employment barrister at Cloisters Chambers with extensive experience at the intersection of Artificial Intelligence (AI) and employment, who advises companies on how to ensure their AI systems are compatible with the law and the rights of workers. Martin is a legal researcher and journalist, and the 2021 UN RAF Fellow. He has written many articles on topical human rights issues, including a fascinating recent article on automation and the international human right to work.

We begin by examining whether the right to employment exists under international law. Our conclusion is that it does: it is inscribed in the Universal Declaration of Human Rights (Article 23), as well as in the European Social Charter. States party to the International Covenant on Economic, Social and Cultural Rights have an obligation (Article 16) to report the steps they undertake to protect these rights to the UN Committee on Economic, Social and Cultural Rights. Parties submit detailed periodic reports and take these seriously, as public exchanges with the Committee show. Governments are already starting to monitor the changes wrought by AI and the ‘Fourth Industrial Revolution’; the Swiss government, for instance, was a pioneer in addressing this in its periodic reports.

The whole review process is diligent and stringent. The scrutiny provides an incentive for states to showcase their efforts and commitment to the right to work.

Martin Kwan

Archit provides some context on AI’s potential impact on the labor market. The McKinsey Global Institute estimates that, by 2030, 30% of jobs will be taken by robots, whilst the World Economic Forum claims that AI will have replaced 85 million jobs worldwide across several industries by 2025. Martin agrees that AI threatens to imperil the right to work. It is up to states to come to terms with this and implement strategies to cope with AI-induced unemployment. One potential response is to ban certain uses of AI outright: India, for instance, has banned autonomous cars to protect millions of jobs.

“Mass redundancies can be prevented if the government is willing to and able to do so. But certain jobs are simply not savable in some countries or in some sectors.”

Martin Kwan

However, it is not always desirable to save jobs at all costs. Companies can make important gains thanks to AI and governments have an incentive to promote the use of AI to improve economic performance. Globalized competition means that consumers will shift to goods produced using AI technology, which will be more competitive in price. Martin believes that technological change cannot be completely halted, and that the practical reality may force companies and governments to favor policies that seek to use more AI and automation.

Martin believes corporations have an ethical responsibility to consider the human rights impact of their activities. It can be futile to ask them to protect jobs at all costs, but workforce sustainability could be integrated into their ‘environmental, social and governance’ (ESG) agenda. It is important to convince companies that workforce and competitiveness are not at odds: if society is pauperized by mass redundancies, companies’ profits will also be slashed. Martin is rather optimistic that the issue of mass redundancy will become a priority in the business community.

Beyond the rights of workers whose jobs are threatened by AI, what are the rights of those who will not lose their jobs, but will see their working life reconfigured by AI? Dee provides us with her insights into this question, made all the more acute by the explosion of technology in employment relations in the context of the pandemic. This has led to a boom in the amount of worker data collected and the expanded use of AI tools to determine whether jobs should be slashed. 

This raises clear issues of discrimination, as we know that AI is subject to significant biases. Indeed, AI is fundamentally about stereotypes, about creating ideal-types of characteristics considered “positive” and “negative”. If you are outside its boxes – because of your appearance, for example – you may be at a severe disadvantage. As Dee tells us, AI is not only used to decide who to employ, but also who to dismiss. 

She argues that our anti-discrimination law can deal with problems raised by AI, but that transparency is key. We see this with job adverts: if you are a woman, you may not be shown certain job adverts, and in most cases you cannot even tell that you are being discriminated against. Auditing code or impact is absolutely essential to bringing transparency, but Dee would like to go further and see companies detail the AI tools they use and explain their functions. AI is perceived as neutral, but it can also replicate biases or even be intentionally used to reinforce them.

“There is a marketing spiel out there, which is: rely on AI because machines aren’t biased. That’s very attractive, but when you look into it in more detail you realize that’s not always true.”

Dee Masters

Dee believes that AI can be useful in several cases, for instance to identify skills or to distribute work based on those skills. We should not, however, march down that path just because it is useful.

Another area in which AI threatens human rights is privacy. Dee explains how, in the context of the pandemic and working from home, AI was used to detect whether employees were working “hard enough”, using cameras or keyboard-monitoring software to observe employees at all times. This is extremely intrusive and violates the right to privacy. In the US, organizations were found to be using machine learning to assess which employees were most at risk of Covid-19 in order to decide who to lay off.

“We’ve crossed this line in which these technologies have become normalized. It’s here to stay and it will be hard to rewind on that.”

Dee Masters

Once again, the laws we have in place are sufficient, but the issue is that legislators and employers are not up to speed on how existing legislation translates to new technology. With data, for instance, the GDPR does not state explicitly that you cannot discriminate in data collection and processing, and therefore leaves room for partial interpretations. Dee argues we need to tighten legislation and understand how data cuts across many areas of our lives. Enforcement is key, particularly avoiding the “siloing” that currently prevents these issues from being taken up in some forums. “Legal protection is meaningless if we don’t know how to apply it,” Dee tells us.

“We need to be more creative not only about these rights but also how they’re going to be enforced.”

Dee Masters

The employment relationship, based on personal trust, is fundamentally challenged by management via app. We can try to mitigate some of these effects by ensuring a human is involved at key junctures, without which we risk allowing unfair and discriminatory decisions. We know that AI is making decisions about dismissal; to Dee, “this is inconsistent with legal protections in this country.”

Dee hopes that, when cases start to be adjudicated, courts will find that these dismissals were unlawful. Until then, however, it’s a “brave new world”. The law will get there, but it will take time, and this is unsatisfactory to both employees and employers. Rather than change the law, the government first and foremost needs to explain it better. 

“People are waking up to the idea that AI and algorithms are making important decisions and they’re not liking it.”

Dee Masters

We need to build trust and show that this technology can be used in ways that are compliant with human rights. Dee would advise workers targeted by AI to use all the legal frameworks available to them, and there are many: the right not to be unfairly dismissed, the right not to be discriminated against, and more. People may not know that they are being unfairly treated, or that there are channels for remedy.

We also need to pay more attention to the companies higher up the value chain, which design the AI tools but are largely left off the hook today. The EU is looking at introducing obligations at every level in the value chain, a move that Dee thinks could be usefully imported to the UK.

So, while AI should not be stopped completely, there are red lines: we need to clearly evaluate the limits of acceptability. For Dee, AI should not make critical decisions about people’s lives; humans should not only review the decision, but also own it. Then – and only then – can we leverage AI for the common good.

Our panelist:

Archit is an LLM student at the University of Cambridge. He previously studied Law as an undergraduate there, and in his final year wrote a dissertation on how (and to what extent) human rights are protected in emergencies. This research was greatly influenced by the COVID-19 pandemic, and has left Archit with a desire to engage more in the future with the question of how human rights can deliver on their promises.

Our guests:

Martin Kwan is a legal researcher and legal journalist. He is a 2021 UN RAF Fellow, and also an Honorary Fellow of the University of Hong Kong’s Asian Institute of International Financial Law. He has written and published many articles in recent years on topical and complex human rights issues, and one such article concerns Automation and the International Human Right to Work.

Dee Masters is a leading employment barrister with extensive practical experience in the technology space, especially in relation to artificial intelligence and its relationship with equality law, human rights, and data protection. She set up the AI Law Consultancy with Robin Allen QC, which aims to help businesses navigate the rapidly changing technological arena and the legal implications of using AI. She has written extensively on the intersection of law and technology, including co-authoring a highly influential report last year: ‘Technology Managing People – the legal implications.’

Further reading

Martin Kwan, ‘Automation and the International Human Right to Work’ (Emory International Law Review)

Dee Masters and Robin Allen QC, ‘Technology Managing People – the legal implications’ (Cloisters Chambers)

Calum McClelland, ‘The Impact of Artificial Intelligence – Widespread Job Losses’ (iotforall)

Lili Cariou, ‘How is Artificial Intelligence Shaping The Future of Work?’ (BusinessBecause) 


Season 6 Episode 6 – Freedom of Expression and Internet Shutdowns in Pakistan

In this week’s episode of the Declarations podcast, host Maryam Tanwir sat down with Munizae and Sulema Jahangir to discuss freedom of expression and internet shutdowns in Pakistan, and their implications for human rights in the country. Restrictions on freedom of expression, attacks on civil society groups, and a climate of fear continue to impede media coverage of abuses by both government security forces and militant groups. Media outlets have come under pressure from the authorities not to criticize government institutions or the judiciary, and journalists – who face threats and attacks – have increasingly resorted to self-censorship. In several cases in 2020, government regulatory agencies blocked cable operators and television channels that had aired critical programs. International conferences organized by our guests to raise awareness of human rights and promote initiatives safeguarding them have been marred by internet shutdowns. With our guests, we explore what’s at stake and what we can do about it.

Our guests start with some context on Pakistan, which is ranked 145th in the latest Reporters Without Borders index of press freedom around the world. Although Pakistan’s constitution guarantees freedom of expression, in practice we observe state repression instead.

There is an atmosphere of fear, especially in the journalist community.

Sulema Jahangir

Munizae emphasises the drafting of Article 19 of the Constitution – which should guarantee freedom of speech – and the numerous exceptions it contains. These exceptions are defined so broadly that practically anything said might be claimed to violate one of them. The elephant in the room in Pakistan, our guest says, is the army, about which one cannot say anything. When anyone talks about military intervention – such as in elections – the reports are banned. The new law under which journalists are charged with sedition (the Prevention of Electronic Crimes Act) stems from the exceptions in Article 19. Journalists, in particular, are targets of Article 19 charges, with cases blocked in the Supreme Court. Some journalists have been kidnapped and even killed for their reporting.

“We cannot talk about the biggest player in politics, and that is the military… If you do not have democracy in Pakistan, I do not think that journalists can be safe.”

Munizae Jahangir

There exists today an unofficial system of television bans, and all the most popular anchors are banned from the airwaves. While this situation is not new, the current administration has been much more brazen toward journalists. The role of the judiciary has changed too, with decreasing independence; today it lets the state get away with an increasing number of charges pressed on the basis of the wide exceptions in Article 19. The political narrative has become very constrained, and major political parties have been banned from speaking on electronic media and even at private events. Islam is another dimension of the Article 19 carve-outs. People in Pakistan are generally very religious, and many people are lynched on charges of “blasphemy.” The government has used this weapon too, stoking fears and creating a climate of hatred.

“The judiciary, the army and the administration have made a coalition in curtailing freedom of speech.”

Sulema Jahangir

Munizae insists on the selectivity of the government, which does not hesitate to go around the law to protect its allies. She tells us, for instance, how the government asked her and her team not to release the interviews they had conducted with the Taliban, while at the same time the government itself was communicating abundantly about its relationship with the Taliban. This shows how the government was trying to control the narrative.

At a major conference organized last year by Sulema (featuring 2,000 people and 160 speakers), the government shut down the internet. The conference’s closing ceremony, in which the opposition leader usually addresses the audience, was disrupted first by a shutdown of the WiFi network. The organizers had back-up internet cables, but the government realized this and called the cable operators to demand that they shut down the line – and the operators complied.

“It shows how petty they are. There are issues of hunger, schools, malnutrition and you are more concerned with cutting the internet at an event of lawyers with the chief justice in attendance. It shows you how petty the Pakistani state is.”

Sulema Jahangir

So what are the options to protect human rights, and what role does tech play? The broader question with respect to tech’s role, Sulema tells us, is one of access. In some areas, there is no reliable internet access. Language is another issue: there are 82 spoken languages in Pakistan, but social media is used almost exclusively by English and Urdu speakers. Women, who are on average less educated, also have less access to the internet than men.

“Pakistan is a state made on national security and not welfare.”

Munizae Jahangir

Our guests agree that social media is a double-edged sword: it is dominated by men and by right-wing, conservative voices, but it is also increasingly used by activists as they are pushed out of national television. Many social movements have been greatly helped by social media, such as the massive Women’s March on 8 March, or the students’ march (in a country where student unions are banned). When events take on massive proportions both on social media and in the streets, state-controlled media has no choice but to report them.

“Social media have given rights to people; they have democratized people, they have given a voice to victims, they have given the other side of the story. If you capture the imagination of the nation, you become a story. For Pakistan, I am so glad it is here.”

Sulema Jahangir

Munizae emphasizes that social media may also aggravate divides, as many still lack access, but agrees that it remains a good alternative to tightly-controlled mainstream media. It is the only way to get alternative viewpoints across, despite numerous issues. As Maryam points out, social media have also helped the spread of violent content, especially of violence against women; social media can amplify certain misogynistic or conservative views.

So, what can we do to move the needle? There is a plethora of issues, says Sulema, the main one being that Pakistan has been a national security state. Inequalities need to be addressed, and those privileged by power or money need to understand that others in their country do not have a fraction of what they do. Munizae says Pakistani women, students, and workers must engage in strategic collective action, which has proven to bear fruit despite the tremendous challenges.

Our panelist:

Maryam has a PhD and post-doctorate from the University of Cambridge. She has been teaching gender and development at the Centre of Development Studies for the last 5 years. She also works as a gender consultant for the World Bank and United Nations. Since the lockdown, Maryam has been branching out towards neuroscience courses, theatre acting and podcasts!

Our guests:

Sulema Jahangir is a dual-qualified lawyer: she is a solicitor of the senior courts of England & Wales and an Advocate of the High Courts in Pakistan. Sulema graduated from Cambridge University in 2003. She is a partner at Dawson Cornwell. Sulema is also a board member of AGHS Legal Aid Cell, the oldest and one of the largest charities providing free legal aid to vulnerable people in Pakistan. Sulema practices in many cases with a human rights element, including child abduction, domestic and honour-based abuse, forced marriage, female genital mutilation, bonded labour and constitutional cases. She was part of a committee behind widening the definition of domestic abuse under a Practice Direction issued by the courts of England & Wales. Sulema has also assisted in advising parliamentary bodies in Pakistan on drafting laws for the protection of women. She is a regular speaker at conferences and regularly appears on television (including the BBC, ITV and Pakistani media channels), on the radio and in the press. She has written and been featured in articles for newspapers (including the Sunday Times, Dawn and the News on Sunday) and for journals on legal topics in Pakistan and the United Kingdom.

Munizae Jahangir is a broadcast journalist and documentary filmmaker, currently anchoring a flagship current affairs show, ‘Spotlight with Munizae Jahangir,’ on one of Pakistan’s leading news networks, Aaj TV. Munizae is a co-founder and Editor-in-Chief of Voicepk.net, a digital media platform focusing on human rights issues. Since 2004 she has been anchoring and reporting for prominent news media outlets. Her high-profile interviews include Hillary Clinton, Benazir Bhutto, Nawaz Sharif, Prime Minister Imran Khan, and Nobel laureate Malala Yousafzai. Munizae’s first award-winning documentary, ‘Search for Freedom,’ depicted the lives of four women caught in the war in Afghanistan. Munizae was honored as a Young Global Leader by the World Economic Forum. She is on the board of the Asma Jahangir legal aid cell, which provides free legal aid to marginalized groups. Jahangir is a founding member of South Asian Women in Media and a council member of the Human Rights Commission of Pakistan.

Further reading

Articles

Why Asma Jahangir was Pakistan’s social conscience – Moni Mohsin in The Guardian.

Pakistan: Media, Critics Under Increasing Attack – Human Rights Watch

International Forum Raises Concerns of Human Rights Violations in Pakistan and China – Business Standard

How Pakistan’s Military Manages the Media – Ayesha Siddiqa in The Wire

Pakistan Media Grows Spine; Takes on the Powerful Military – Seema Guha in Outlook

Books

Ayesha Siddiqa (2007) Military Inc: Inside Pakistan’s Military Economy.

Ayesha Jalal (1995) Democracy and authoritarianism in South Asia: a comparative and historical perspective


Season 6 Episode 5 – Biometrics and Refugees

In episode 5 of this season of the Declarations podcast, host Maryam Tanwir and panelist Yasar Cohen-Shah sat down with Belkis Wille, senior researcher at Human Rights Watch, and former UN official Karl Steinacker to discuss the collection of refugees’ biometric data. Last summer, Human Rights Watch reported that a database of biometric data collected by UNHCR from Rohingya refugees had been handed to Myanmar’s government – the very government from which the refugees were fleeing. This scandal has brought to a head the debates surrounding the use of refugees’ biometric data: from Yemen to Afghanistan, Somalia to Syria, biometric data is now fundamental to how aid groups interact with refugees. But how does this affect their human rights, and can it ever be used responsibly?

Belkis kicks off the episode by presenting the results of the report authored by her organization, Human Rights Watch (HRW), on the transfer of Rohingya refugees’ biometric data to the Myanmar government. The refugees’ data was collected upon their entry into Bangladesh in a registration process that was required before refugees could be granted a ‘smart ID’ and access aid and services. However, HRW was able to expose the fact that Bangladesh was sharing this biometric data with Myanmar’s government without the refugees’ informed consent, raising obvious concerns for the refugees’ safety and human rights. Disturbingly, HRW found that the United Nations High Commissioner for Refugees (UNHCR) had in fact created the entire system by collecting the data in the first place.

“Rohingyas had no choice but to agree or lose access to services.”

Belkis Wille

Karl, a former UNHCR official himself, highlights that the issue of ‘registration’ is not covered by the conventions that founded the UNHCR or created a legal framework for aid; this is a task that UNHCR took on much later. After decolonization, the Western powers had a direct interest in making sure that borders in the Global South stayed open so that refugees could find help in neighboring countries in the face of war. According to Karl, the prevailing philosophy of the Western powers ran: “you put them in camps, we’ll feed them.” This is where the registration process began – initially in simple ‘paper-and-pen’ form – to organize the distribution of food and supplies in refugee camps. As technology improved, this registration system became increasingly sophisticated, integrating photos and other personal details. The attacks of September 11, 2001 abruptly put refugee registration in the spotlight. From a niche, localized process, it became a security priority for the UN’s main donors, who pushed the UNHCR to adopt much more sophisticated methods.

“A lot of the push toward mainstreaming biometric registration comes from a desire to prevent fraud.”

Belkis Wille

One reason why these tools were originally adopted is fraud prevention. Biometric registration is seen as a panacea for fraud, as it enables precise identification of refugees and prevents resources from being distributed to the same people under different identities. However, Belkis points out that research shows fraud is not happening at the micro level of distribution, but rather “higher up the chain.” The other issue is efficiency: agencies and organisations are under increasing pressure to provide more assistance, faster. Here again, there is no clear evidence that biometric technology has done much to improve this. This leads Belkis to think that some key donors and other organisations have jumped too swiftly to the conclusion that biometric data is the key. There are risks associated with these systems, which need to be weighed against the alleged benefits.

“Once you create these systems, you won’t be able to control what happens to them and how they’re used.”

Belkis Wille

Karl recalls how, as a young aid worker, he welcomed the arrival of biometrics. In the past, the head of household (generally a man) was identified and the members of his family would depend on his registration – no individual records were kept. The way data was collected before was ‘undignified’: the police or military would enter a refugee camp, round up those living there and subject them to a long and painful registration exercise, collecting fingerprints with an inkpad. It was “almost traumatizing” even for the aid workers, not to mention the refugees, who had to endure hours or even days without being able to move. The promise of biometrics was to end this, and it did. However, it has also brought new risks: there is ample evidence that the number of aid beneficiaries plummets with biometric registration, for instance. Perhaps the problem today is an “overuse” of biometrics, Karl tells us.

“I still think the advantages outweigh the way it was done in the analogue days.”

Karl Steinacker

Yasar points our guests to the notion of consent. In a world where biometric data collection is becoming so common, how can we guarantee the consent of the populations whose data is collected? Belkis points out that the broad framework in the aid industry is that data capture is only permissible if informed consent is provided: an official is supposed to explain why and how the data is collected. However, if an individual is fleeing armed conflict, what choice does he or she have when access to all forms of aid is conditioned on biometric registration?

“It’s hard to argue that they had a choice. Can we ever see someone in this situation making the decision without coercion?”

Belkis Wille

Beyond consent, information is also key. In practice, the UNHCR fails to explain why data is collected and with whom it will be shared. The aid organizations themselves do not always know exactly what happens with the data. “Information and transparency” should, according to Belkis, become the new paradigm; “informed consent” can never be provided in these circumstances. There is also a problem with the collection of biometric information on children. In Kenya, some 40,000 Kenyan children were registered as refugees years ago and now, because of this previous registration, cannot get ID cards despite being Kenyan citizens.

UN agencies can enter into data-sharing agreements with countries, but the nature of these agreements is highly confidential. If you’re a refugee, you have no way of knowing where your data is going. In Jordan, for instance, the UN has admitted that they share refugee data with the Jordanian government, something that refugees are unaware of. This points to the power imbalances that plague the aid sector, with refugees unable to refuse that their data be shared.

“We have no idea what the UN is agreeing to share in a specific country context with the government.”

Belkis Wille

Karl points out that data sharing with governments has always been part of the aid process, and he does not worry about it per se. However, the situation in Bangladesh is different, as the government is sharing data with the very state that persecuted these refugees; this is unheard of and particularly problematic. According to Karl, cases should be examined on a case-by-case basis. On aid in general, he notes there is no recourse for refugees. Compared with new legislation in the West, such as the GDPR, refugees are afforded little to no rights – there is no equivalent of the EU’s ‘right to be forgotten,’ for instance. The discussion on refugees’ rights has to take place within the international community as a whole, and in particular in the states whose governments fund the UNHCR.

“The first and biggest shortcoming in the aid sector is that there is neither a right for individuals to know what data is collected about them, nor a right to correct it. Secondly, there is no institutional pressure to make this happen.”

Karl Steinacker

In terms of future trends, Belkis notes that more conversations are taking place today than a few years ago. She finds it positive that organizations have started hiring data protection officers and paying more attention to the issue. UN agencies have also published policies. These policies are good, according to her, but their implementation is lacking. For instance, a risk assessment needs to be conducted every time the UNHCR launches a new data collection process, yet all too often these do not take place, mostly for logistical reasons: there are not enough trained staff. Belkis calls on donors to take action: at the end of the day, resources are key to training these new data protection officers, and donors need to realize that more funding is needed if they want to provide refugees with sufficient data protection.

“There is still a long way to go, but we are seeing organizations grappling with these issues much more seriously.”

Belkis Wille

Our panelist:

Yasar is an MPhil student in World History at the University of Cambridge. He is studying cultural pan-Africanism in Nkrumah’s Ghana in the early 1960s. He is originally from London, and previously studied History at the University of Oxford. After graduating, he hopes to work in international development, particularly with refugees.

Our guests:

Belkis Wille is a senior researcher with the Conflict and Crisis division at Human Rights Watch. Before taking up the role, Wille worked as Human Rights Watch’s senior Iraq researcher, and before that was the Kuwait, Qatar and Yemen researcher. Previously, Wille worked at the World Organisation Against Torture in Libya.

Karl Steinacker is an expert on digital identity. As a manager and diplomat of the United Nations High Commissioner for Refugees he was for several years in charge of registration, biometrics, and the digital identity of refugees. Currently he works with the International Civil Society Centre and Digital Equity on this and related digital issues.

Further reading

Articles

Biometric Data and the Taliban: What are the Risks? (The New Humanitarian)

The UN’s Refugee Data Shame (The New Humanitarian)

Head to Head: Biometrics and Aid (The New Humanitarian)

Biometric Refugee Registration: Between Benefits, Risks and Ethics (LSE International Development Blog)

Although shocking, the Rohingya biometrics scandal is not surprising and could have been prevented (ODI Insights)

Rights groups call on Greece to halt plans to collect biometric data (Info Migrants)

What’s the controversy with Ghana’s new ID card? (BBC Africa Daily podcast)

Books

Katja Lindskov Jacobsen, The Politics of Humanitarian Technology: Good Intentions, Unintended Consequences and Insecurity (2017)

Kristin Bergtora Sandvik and Katja Lindskov Jacobsen (eds.), UNHCR and the Struggle for Accountability: Technology, law and results-based management (2016)


Season 6 Episode 4 – Empathy Games

For Episode 4 of this season’s Declarations podcast, host Maryam Tanwir and panelist Alice Horell sit down to discuss empathy games with Dr Karen Schrier, Associate Professor and Founding Director of the Games and Emerging Media program at Marist College, and Florent Maurin, creator of The Pixel Hunt, a video games studio with a focus on reality-inspired games.

Former American President Barack Obama thinks we are suffering from an empathy deficit. According to him, we need to see the problems of our world through the eyes of others. Could the socio-political crises of our time be solved with the use of ’empathy machines’ – means of radically putting oneself in another’s shoes to create a more understanding and accepting world? Many researchers and game designers are trying to achieve this through first-person video games. Our conversation discusses these so-called ‘empathy machines’ and tries to understand their potential for changing the world.

We kick off the episode with a discussion of a game designed by Florent, Bury me, my love. This critically acclaimed game places the player in an interactive narrative following Nour, a Syrian woman traveling to Europe who is helped over WhatsApp by her husband Majd, who is still in Syria. Florent, a video game designer who wanted to use games to convey his shock at the situation of Syrian refugees in 2015, emphasizes how unnerving the very idea of designing a video game about the crisis was to him at first. It was only when he came across an article composed of the WhatsApp messages sent by a refugee named Dana to her husband in Damascus that he got the idea for a non-linear game based on her experience. After contacting Dana and obtaining her approval – which was crucial to him – he started designing Bury me, my love. The aim was not to physically recreate the journey, but rather to design a game based on the WhatsApp exchanges between the couple.

Interestingly, Florent highlights how his game remains a fiction. While Dana reviewed the game and he interviewed several other refugees to make the game as realistic as possible, the game’s character is not Dana, but Nour – a fictional character. Karen, who has conducted research on empathy games, points to the multiple research steps undertaken by Florent before designing the game. To her, these are indispensable in the design process. Storytelling is a powerful tool to sustain empathy, and these games, if well researched, are ways to tell stories. However, it is hard to measure the impact of these games on empathy, a term that is difficult to pin down in general. Karen defines empathy as “considering other people’s feelings” and compassion as the “next level”, not only recognizing the other’s feelings but also helping them.

“Empathy as a concept has been deliberated. However, is it something we can measure? And if that’s the case, does it even matter?”

Karen Schrier

Alice raises the issue of the distance that virtual reality creates between players and the situations depicted. Games have been criticized for collapsing complex situations into play. What are the ethical implications of this? Karen agrees, and says this is something researchers and designers should always keep in mind. Careful testing and input from people with a deep personal understanding of the situation are crucial to designing scripts that generate empathy. Design decisions are also key: do you create a first-person game, or one with a more external perspective? Each of these questions must be approached with the objective of reducing harm and maximizing positive impact.

There’s always a challenge with [ethical considerations in designing such games]. You could do more harm than good and it’s really a fine line.

Karen Schrier

Florent explains that when he designed the game, impact was not at the forefront of his mind. He approached making the game more like a journalist, deploying a new method to tell a story he felt was important and that people should learn more about. That does not mean he is neutral – far from it – and his perspective is represented in the game. What Florent takes most pride in is the fact that some people may look at migration differently after playing his game. Empathy, to him, is “acquired over the course of a lifetime”: even if a single piece of art, including a video game, cannot make someone empathetic on its own, little by little people may build empathy over the long term.

Of course, my point of view appears in the game. Because – as any author – when you do something, what you produce is influenced by who you are.

Florent Maurin

Florent goes on to say that games are designed as conversations rather than discourses. The game designer tries to anticipate all the questions a player may ask and to provide satisfying answers. Rather than a discourse written by the game designer, then, games are conversations between players and designers. This approach has drawbacks, and can lead players to become more passive, but it can also stimulate activity through interaction.

Karen brings our attention to scholarship on ‘news games,’ those “that give us some kind of perspective on current events, or issues or topics”. She always asks her students what the advantages of a game are compared with traditional means of conveying news, such as text articles. She reports that many of her students do not read or watch the news, and games are a useful way for them to engage with current affairs when they might not otherwise. Karen explains that players do not just play, but “converse through play,” which can draw the younger generation into the public sphere. For her, games also need to be seen as “public spheres” in their own right, where players interact, discuss current events, and even protest.

We don’t consider our youth as part of the public sphere. Youth should be part of the public sphere, they should be part of conversation, they have as much as anybody else reason to decide how our world should be.

Karen Schrier

Indeed, Karen emphasizes how we collectively fail to see games as important forums, especially for the youth. This reflects in part a tendency to exclude youth from society and to associate youth with a lack of seriousness. We should instead see games as “productive” and “impactful,” in part through telling stories about current issues.

Florent highlights how important the game’s realism is to its impact. His game was criticized as “unrealistic” by the far right because his main character was a woman; according to detractors, this did not represent the reality of migration. He was unfazed, as Nour is directly based on Dana’s life story. He also points to the game Path Out, an autobiographical narrative game written by Abdullah Karam, a Syrian refugee. Karen also directs our attention to the dangers of providing different perspectives on an event or situation. For instance, a game that enables players either to play as Mexican migrants trying to cross the border into the US or as border guards trying to stop them strikes her as highly problematic: such a game seems to claim that the situation involves two equal sides, which is far from the case – one side is marginalized and suffering, the other a privileged border control force.

Our Panelist:

Alice is a third-year Human, Social and Political Sciences student at the University of Cambridge and is originally from London. Her studies focus on the politics of conflict and peace, particularly on how new technologies are affecting the refugee crisis, an interest she developed while volunteering for a migrant rights charity.

Our guests:

Karen Schrier is an Associate Professor and Founding Director of the Games and Emerging Media program at Marist College, and also of the Play Innovation Lab.

Florent Maurin is the creator of The Pixel Hunt, a video games studio with a focus on reality-inspired games. He is the creator of Bury me, my love, a critically acclaimed game which places the player in an interactive WhatsApp-like fiction following Nour, a Syrian woman traveling to Europe who is helped by her husband Majd, who is still in Syria.

Further reading

Press coverage

NPR’s Goats and Soda: ‘A Kid In A Refugee Camp Thought Video Games Fell From Heaven. Now He Makes Them.’

Bury me, my love: coverage in the Washington Post; Radical Art Review

Academic reading

Alberghini, D. (2020) Improving empathy: is virtual reality an effective approach to educating about refugees?

Farber, M. & Schrier, K. (2017) The strengths and limitations of using digital games as “empathy machines”. Working paper for the UNESCO MGIEP (Mahatma Gandhi Institute of Education for Peace and Sustainable Development)

Farber, M. & Schrier, K. (2021) ‘Beyond Winning: A Situational Analysis of Two Digital Autobiographical Games’ in The International Journal of Computer Game Research 21: 4.

Mukund et al. (2022) ‘Effects of a Digital Game-Based Course in Building Adolescents’ Knowledge and Social-Emotional Competencies’ in Games for Health Journal 11: 1.

Johnson, A. (2019) ‘Using Empathy Games in the Social Sciences’ 


Season 6 Episode 3 – Live Facial Recognition

The third episode of this season of the Declarations podcast delves into the topic of live facial recognition. Host Maryam Tanwir and panelist Veronica-Nicolle Hera sat down with Daragh Murray and Pete Fussey, who co-authored the Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology in July 2019. Live facial recognition has been widely debated in recent years, both in the UK and internationally. While several campaigning organisations advocate against the use of this technology based on the prohibition of discrimination set out in human rights law, independent academic research reveals important insights into how the technology has been trialled. Our guests are at the forefront of this research, and present some of their findings in this episode.

We kick off the episode with definitional issues. What is facial recognition technology? Pete explains that, when we speak of “facial recognition”, we are in fact referring to several technologies: one-to-one systems (such as those used to unlock smartphones or clear passport gates at airports), which do not need databases, and one-to-many systems, such as live facial recognition (LFR), which compare images of passers-by against databases.
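To make the distinction concrete, here is a minimal sketch assuming faces have already been converted into numerical embeddings by some face-recognition model; the random vectors, the 0.8 threshold, and the watchlist size are arbitrary stand-ins for illustration, not details of any real deployment. Verification compares one capture against one enrolled template, while identification scans every entry in a watchlist – which is why even a small error rate can flag many innocent passers-by.

```python
# Illustrative sketch only: one-to-one verification vs. one-to-many
# identification over pre-computed face embeddings. Real systems use
# learned embeddings from a face-recognition network; random vectors
# stand in for them here.
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.8  # arbitrary decision threshold for this example

# One-to-one: compare a live capture against a single enrolled template.
enrolled = rng.normal(size=128)
live_capture = enrolled + rng.normal(scale=0.1, size=128)  # same person, some noise
verified = cosine(live_capture, enrolled) > THRESHOLD      # no database needed

# One-to-many: compare each passer-by against every entry in a watchlist.
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(10_000)}
passerby = rng.normal(size=128)
scores = {name: cosine(passerby, emb) for name, emb in watchlist.items()}
best_name, best_score = max(scores.items(), key=lambda kv: kv[1])
alert = best_score > THRESHOLD  # a false match here flags an innocent passer-by

print("verified:", verified)
print("best watchlist match:", best_name, round(best_score, 3), "alert:", alert)
```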

“[Live Facial Recognition] is widely seen to be more intrusive than other forms of surveillance.”

Pete Fussey

Some facial recognition is retrospective, as in the US, and some takes place constantly in real time (as with the CCTV networks used in Chinese cities). These different types of facial recognition, both guests emphasize, have different implications. One-to-many live facial recognition systems are seen as more “intrusive” and “intimate”, their impact magnified by their ability to process images of several thousand people in a day.

[Live Facial Recognition] has the potential to be an extremely invasive tool and… to really change the balance of power between state and citizens.

Daragh Murray

Comparing LFR with past surveillance technologies reveals an exponential increase in state surveillance capacities. The East German “Stasi”, for instance – widely considered one of the most sophisticated surveillance apparatuses – had access to only a fraction of the information that can be collected today. That’s why consideration of the human rights impact of these technologies is essential.

As Pete notes, we have often begun to review new technology years after it is initially deployed, and tend to look at a single aspect rather than examining a technology more broadly. For instance, Pete highlights how we look at technologies with a focus on authorisation decisions, even though their uses are likely to change over time. Potential future uses, therefore, need to be factored into our analysis.

There needs to be proper thinking about the whole life-cycle of these technologies.

Pete Fussey

Veronica then asks our guests to discuss their recent research on the Metropolitan Police’s practices. The Met has been trialling LFR since 2016, when it was first deployed during the Notting Hill Carnival. Pete highlights how important it is to see the technology in context: it is hard to anticipate the direction a given technology will take, and it is only through use that we can see its nuances.

The report they co-authored blended sociology and human rights law. From a sociological perspective, their main finding is that the human adjudication that was considered essential to the technology’s compliance with domestic and international law was close to non-existent. As Pete told us, “There is adjudication, but it’s not meaningful”.

“There is adjudication, but it’s not meaningful.”

Pete Fussey

Regarding human rights, Daragh outlines the 3-part test used to evaluate a potential inconsistency with human rights law. From the human rights perspective, new technology ought to (1) comply with local law, (2) have a legitimate aim, and (3) be necessary in a democratic society. Human rights law is about protection from arbitrary intervention by the state.

From this perspective, the report’s main finding hinged on compliance with law. This is delicate, as there is no law directly regulating LFR; the only applicable law stems from common law, which stipulates that the police ought to “undertake activities necessary to protect the public.” How can this ensure that LFR is not deployed arbitrarily? The report concluded that the Metropolitan Police’s deployment of LFR was most likely unlawful, as it probably does not comply with the UK’s Human Rights Act. Indeed, a court later found that a similar deployment by South Wales Police was unlawful.

We concluded that it was unlikely the Met’s deployment of LFR would be lawful.

Daragh Murray

As for the third important test – necessity in a democratic society – there are conflicting norms: protection vs. privacy. In short, you have to demonstrate the technology’s utility against its potential harm. In this case, that would involve showing how LFR could be used to prevent crime.

There was also a lack of pre-deployment assessment. For instance, the well-documented biases of LFR technology were never assessed. Pete highlights how the introduction of new technology is often smoothed through softened terminology – “it’s a trial,” for instance. The Met’s use of LFR, however, was a deployment rather than a trial.

So, how should LFR be used in the future, if it should be used at all? From a human rights approach, Daragh thinks what is most important is to consider every deployment on a case-by-case basis, and to recognize the difference between different technologies. He notes the difference between using LFR at borders against a narrow database of people who are known threats and deploying it at a protest. The latter is likely to have a chilling effect on the right to protest and a “corroding effect on democracy”. The most problematic deployment, of course, is the use of LFR via an entire CCTV network.

The risk is that we sleepwalk into a very different type of society.

Daragh Murray

Pete highlights how thinking about LFR technology from the perspective of data protection is too restrictive. Terrorism or child abuse are often invoked to justify deployment of this technology, but this does not fit with what our guests saw.

Both our guests argue that the biases built into the technology make its use fundamentally problematic, whatever the circumstances. As Pete says, it is a scientific fact that algorithms and LFR technology carry several biases: gender, age, race. Knowing that, how can we deploy such technology on the public?

Daragh also points to the impact these technologies have on the relationship between citizens and the police. Previously, the police might have used community-based policing to work with areas affected by crime and address problems as they arose. With mass surveillance, however, entire populations are monitored and checks are performed based on the alerts the surveillance generates.

“We simply have no oversight or regulation of it. None.”

Pete Fussey

So, what is the public’s role in deciding whether LFR tech should be used? 

Pete argues that it is not so much about a majority of the public approving a technology, but rather its impact on the most vulnerable segments of the population. Indeed, it is challenging to form an opinion on these topics, given their highly technical nature. If a technology is sold as working perfectly, this influences the opinion we have of it. Daragh adds that prior community engagement is key: in Notting Hill, for instance, nothing was done to explain why the technology was being used.

“We overstate public opinion as a justification for the use of these technologies.”

Pete Fussey

Finally, we asked our guests whether LFR could ever be deployed in compliance with human rights. Daragh thinks it could be, but only in very narrow cases – airports, for example. Even then, we do not yet have sufficient knowledge of the technology to give it a green light. Across a city, or in specific public locations, he doubts it can ever be compliant with human rights. The chilling effect is key: how will this technology allow people to grow freely, perhaps outside the norms? Anything that has the potential to interfere with our freedom to build our own lives should not be implemented.

Nevertheless, Pete thinks that the technology will be used regardless of how problematic it is, and recommends that implementation be at least temporarily paused so that as many safeguards as possible can be put in place before any further roll-out.

Our Panelist:

Veronica is an MPhil student reading Politics and International Studies at the University of Cambridge. Her research focuses on public perceptions of trust in government across democracies and authoritarian regimes. She is originally from Romania and completed her undergraduate degree at University College London. Her interest in human rights issues and technology stems from her work with the3million, the largest campaign organisation advocating for the rights of EU citizens in the UK.

Our guests:

Pete Fussey is professor of sociology at the University of Essex. Professor Fussey’s research focuses on surveillance, human rights and technology, digital sociology, algorithmic justice, intelligence oversight, technology and policing, and urban studies. He is a director of the Centre for Research into Information, Surveillance and Privacy (CRISP) – a collaboration between surveillance researchers at the universities of St Andrews, Edinburgh, Stirling and Essex – and research director for the ESRC Human Rights, Big Data and Technology project (www.hrbdt.ac.uk). As part of this project Professor Fussey leads research teams empirically analysing digital security strategies in the US, UK, Brazil, India and Germany.

Other work has focused on urban resilience and, separately, organised crime in the EU with particular reference to the trafficking of children for criminal exploitation (he authored Child Trafficking in the EU: Policing and Protecting Europe’s Most Vulnerable (Routledge) in 2017). Further books include Securing and Sustaining the Olympic City (Ashgate), Terrorism and the Olympics (Routledge), and a co-authored book on social science research methodology, Researching Crime: Approaches, Methods and Application (Palgrave). He has also co-authored one of the UK’s best-selling criminology textbooks (Criminology: A Sociological Introduction) with colleagues from the University of Essex. He is currently contracted by Oxford University Press to author a book entitled “Policing and Human Rights in the Age of AI” (due Spring 2022).

Daragh Murray is a Senior Lecturer at the Human Rights Centre & School of Law, specialising in international human rights law and the law of armed conflict. He has a particular interest in the use of artificial intelligence and other advanced technologies, particularly in intelligence agency and law enforcement contexts. He has been awarded a UKRI Future Leaders Fellowship to examine the impact of artificial intelligence on individual development and the functioning of democratic societies. This four-year project began in January 2020 and has a particular emphasis on law enforcement, intelligence agency, and military AI applications. His previous research examined the relationship between human rights law and the law of armed conflict, and the regulation and engagement of non-State armed groups.

Daragh is currently a member of the Human Rights Big Data & Technology Project, based at the University of Essex Human Rights Centre, and the Open Source for Rights Project, based at the University of Swansea. He also teaches on the Peace Support Operations course run by the International Institute of Humanitarian Law in Sanremo.

He is on the Fight For Humanity Advisory Council, an NGO focused on promoting human rights compliance among armed groups. Daragh has previously worked as head of the International Unit at the Palestinian Centre for Human Rights, based in the Gaza Strip. In 2011, he served as Rapporteur for an Independent Civil Society Fact-Finding Mission to Libya, which visited western Libya in November 2011 in the immediate aftermath of the revolution.

Further reading

Pete Fussey and Daragh Murray (2019) ‘Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology’

Pete Fussey and Daragh Murray (2020) ‘Policing Uses of Live Facial Recognition in the United Kingdom’ in Amba Kak, ed., Regulating Biometrics: Global Approaches and Urgent Questions (published by the AI Now Institute)

Davide Castelvecchi (2020) ‘Is facial recognition too biased to be let loose?’ (featured in Nature)

Posted by Declarations on

Season 6 Episode 2 – Fortress Europe

In this week’s episode, host Maryam Tanwir and panellist Yasmin Homer discuss the role of technology in the securitization of European borders with MEP Patrick Breyer and researcher Ainhoa Ruiz (see bios below). It was 71 years ago that the 1951 UN Refugee Convention codified the rights of refugees to seek sanctuary and the obligation of states to protect them. It was in 2015 that Angela Merkel famously declared “wir schaffen das” – “we can do it.” Yet the International Organization for Migration has described 2021 as the deadliest year for migration routes to and within Europe since 2018. At least 1,315 people died making the central Mediterranean crossing, while at least 41 lives were lost at the land border between Turkey and Greece. The creation of Fortress Europe is inserting technology into the heart of the human story of migration, as migrants uproot themselves to escape war, famine, political violence and economic instability in search of a better and safer life, undertaking an increasingly treacherous and unforgiving journey. What is the role of technology in the ongoing securitization of the EU’s borders? What are the implications for human rights?

Fortress Europe starts, ironically, with a free movement agreement, which entails enforcing a harder exterior border. In some manner we are telling everyone that those of us inside this agreement are civilization and those outside are barbarians.

Ainhoa Ruiz

The conversation starts off with a summary of the current situation at European borders. Ainhoa notes that, ironically, the concept of Fortress Europe can be traced back to the Schengen Agreement, which enforced a division between those inside the zone, who were granted free movement, and those outside, whose movement toward Europe was impeded. She also highlights how the notion of a European border creates a division between “civilized” Europeans and “barbaric” outsiders, and points to Article 12 of the International Covenant on Civil and Political Rights, which guarantees a right to leave one’s country.

“We are not actually sure about what they are doing with this data. So, these technological walls affect migrants but are also affecting us inside the free movement area.”

Ainhoa Ruiz

So, what do Europe’s walls look like? Today, there are three physical walls on the EU’s borders. As Ainhoa notes, however, walls are not only physical, but also “mental” and “administrative.” Technology constitutes another wall, not only for people trying to come into the EU but also for those living inside it, whose movement is being watched. The last “wall” is constituted by Frontex, which acts as a European border police force, forcing migrants onto ever more dangerous routes.

The European Union is pouring enormous amounts of resources and money into building this fortress

Patrick Breyer

Patrick highlights that we have just observed Holocaust Remembrance Day, which ought to be an opportunity for us to reflect collectively on the fact that we could all be refugees fleeing terror. For him, the starting point was the Syrian Civil War, when refugees were first perceived as a threat. This played into the hands of authoritarian parties, who were able to impose the theme on public debate, leading democratic parties, in turn, to follow suit. Both of our guests highlight a climate of fear: fear of globalization, fear of crime, and more. These concerns are compounded in issues of migration.

They are starting to collect information about our plane travels, but they also want to expand it to train and ferry travel. They are using algorithms that evaluate the risk we pose based on patterns that allegedly indicate a risk if we have certain criteria in common with past perpetrators.

Ainhoa Ruiz

This fortress is buttressed by the collection of personal data, leading us toward a security society that has more traits in common with China than we like to admit. Our “mere existence” – as our guest puts it – is surveilled under the pretence of preserving Europe’s security. The military complex has adapted to this situation and is providing these tools, which add up to a significant industry.

Ainhoa reinforces how important the military industry’s role is in pushing for the adoption of these technologies, and notes that EU Member States are guilty of letting the industry participate closely in policy decisions. This contracting-out of security diffuses accountability and makes the system opaque and removed from public scrutiny.

Patrick recently won a transparency lawsuit against a security project called iBorderCtrl, which was created to evaluate a border technology that required people entering the EU to answer questions on video. The twist is that the technology was supposed to leverage AI to detect lies. He explains how, in his view, this technology presents a grave problem from a human rights perspective, as the machine can never be reliable enough and would lead to unfair rejections at the border. The technology also risks being discriminatory: face detection technology has previously proven to be less accurate for people of colour. If deployed, this technology would then be on the market and could be sold to authoritarian governments around the world. It has “enormous potential for abuse,” concludes our guest.

“All this research is happening in the dark, and they have recurrently been funding the development of crowd control and mass surveillance technologies. This is really a danger to our free and open society.”

Patrick Breyer

Can technology and human rights be reconciled in any way? Is there a more equitable balance to be struck, or is a whole new model needed? Ainhoa argues it is too soon to know, but the clues point to an ever-increasing securitization of technology, with new killer drones that can take a life in complete disregard of any human right. Technology, and the companies that develop it, always seem to outpace democratic rulemaking, with the complicity of policymakers who let lobbies make the rules.

We need to stop and try to think about the consequences of all this technology. Technology and companies run faster than society… It is creating more insecurity than the insecurity it claims to fight.

Ainhoa Ruiz

Patrick highlights how the new generation is mobilized, as we see with the climate protests, and could change public discourse. Finally, he explains that living in a securitized environment does not guarantee low crime – far from it. The examples of the US and the UK show that securitization and security do not go hand in hand. Human rights are perfectly compatible with targeted investigation and with security, but they are currently under threat. It should be our role, he concludes, to defend an open and free society.

Our Panelist:

Yasmin is a second-year undergraduate studying History. She is studying Early Modern Eurasia with an interest in the importance of liminality and “borders” in forming socio-political and cultural identity. Originally from Buckinghamshire, she has engaged with human rights issues since secondary school. After graduating, she aspires to work with international governance concerning peace, gender and security.

Our guests:

Ainhoa Ruiz has been a researcher at the Centre Delàs d’Estudis per la Pau since 2014, with an interest in border militarisation, arms trading and private military companies. She received her doctorate for a thesis on the militarisation and walling of the border space, and has worked in both Colombia and Palestine. Her report “A Walled World, towards a Global Apartheid” warns of the expansion of the border space into both European states and third countries, linking the 1000km of physical walls to virtual walls of surveillance and discourses of violence.

Patrick Breyer is a Member of the European Parliament from the German Piratenpartei. A self-described “digital freedom fighter,” he was elected to the European Parliament in 2019, is an active member of the NGO Working Group on Data Retention, and a member of the Committee on Civil Liberties, Justice and Home Affairs. Patrick recently sought an order from the European Court of Justice to publicly release documents concerning iBorderCtrl, an artificial intelligence technology for scanning and detecting the emotions of migrants crossing EU borders.

Further reading

Fortress Europe: the millions spent on military-grade tech to deter refugees (The Guardian 2021)

Automated technologies and the future of Fortress Europe (Amnesty International 2019)

Fortress Europe: dispatches from a gated Continent (Matthew Carr 2016)

A Walled World: towards global apartheid (Ainhoa Ruiz, Mark Akkerman, Pere Brunet 2020)

Posted by Declarations on

Season 6 Episode 1 – Predictive Policing

For this week’s episode, host Maryam Tanwir and panelist Nanna Sæten speak about predictive policing with Johannes Heiler, Adviser on Anti-Terrorism Issues at the OSCE Office for Democratic Institutions and Human Rights (ODIHR), and Miri Zilka, Research Associate in the Machine Learning Group at the University of Cambridge. Predictive policing leverages the techniques of statistics and machine learning for the purpose of predicting crime. The human rights perspective raises several interesting questions about its use: as the technology functions today, it seems to perpetuate existing bias in police work, but could this be overcome? Using technology for police work raises questions of who is responsible for the protection of human rights and how to decide whose human rights to uphold in the case of conflict. What is clear to both of our guests is that there need to be clear channels of oversight if human rights are to be protected in digitized law enforcement.

“All of these systems impact human rights.”

Johannes Heiler

This episode starts with a definition of the issue at hand. When we speak of predictive policing, we are usually referring to models that predict the time and place where crime will happen, and more generally to all models that attempt to predict crime that has yet to happen. However, Johannes notes it is important to distinguish between predictive policing that aims to map crime hotspots and models that attempt to predict crime at the individual level.

“What we don’t know is exactly how they work, we don’t know what type of info they take in, we don’t know the algorithms and, most importantly, we don’t know how they’re being used by the police.”

Miri Zilka

Can machine learning help us overcome the existing heuristic biases in policing, or does it simply accentuate them? The issue with AI is that it tends to reify and reproduce the human biases baked into the data. Where the police search for crime there is a risk of additional bias, as the police tend to look for crime in certain areas more than others (and victim reporting is not exempt from bias either). There is pre-existing over-policing in certain neighbourhoods around the world, and this feeds the tools used for predictive policing in a feedback loop.

“There are real risks that the datasets that are used in the analysis are tainted by discriminatory policing from the start. The bias reproduces itself in the machine learning in a feedback loop. The whole system is built to perpetuate and reinforce discrimination.”

Johannes Heiler
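
To make that feedback loop concrete, here is a minimal toy sketch (our illustration, not any real predictive policing product), assuming two areas with identical underlying crime rates but a skewed historical record. Because patrols follow the recorded data and new records follow the patrols, the original skew is reproduced year after year.

```python
# Toy model of the feedback loop: identical true crime rates, biased history.
# All numbers are invented for illustration.
true_rate = {"area_A": 0.10, "area_B": 0.10}   # the areas are actually identical
recorded  = {"area_A": 12.0, "area_B": 8.0}    # but the historical record is skewed

for year in range(1, 6):
    total = sum(recorded.values())
    patrol_share = {a: recorded[a] / total for a in recorded}  # deploy where the data points
    for a in recorded:
        # new recorded crime depends on both true crime and police presence,
        # so the over-patrolled area keeps "confirming" the prediction
        recorded[a] += 100 * true_rate[a] * patrol_share[a]
    shares = {a: f"{recorded[a] / sum(recorded.values()):.0%}" for a in recorded}
    print(f"year {year}: share of recorded crime {shares}")
```

Running it shows the initial 60/40 split in recorded crime persisting indefinitely, even though nothing distinguishes the two areas: the data never gets a chance to correct itself.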

However, does this mean that predictive policing is in and of itself problematic, or simply that its current uses are? Miri argues that the technology itself isn’t the problem, but that its current uses may indeed be deemed problematic. There are “safe uses” of the technology that can help law enforcement assist people in distress.

“The public might accept the use of certain tools if they are shown that they reap significant benefits.”

Miri Zilka

Technology, while often presented as more neutral than human-led processes, is not necessarily so. Both our guests agree that technology reflects the biases of the people designing technological artefacts, something which applies to predictive policing software.

Our guests are then asked why predictive policing focuses on petty crime rather than white-collar crime. For both of our guests, some tools are already in place for the latter, but their uses are less controversial and thus receive less public attention. Even then, there are issues: for instance, bank accounts closed without notice and without reason.

It seems to our panelist and both of our guests that in recent years we are moving toward a more proactive type of policing rather than a reactive one. Under the pressure of terrorism, police departments across the world are increasingly trying to prevent crime from happening, rather than simply attempting to punish it. However, as Johannes explains, “Preventing crime is also a human rights obligation of the state.” This shift thus makes sense, but it comes at a price. In terrorism cases we target crime that has not yet been committed, which raises a lot of issues. Can a crime be judged on intent alone?

Bias is inherently human and if systems are built and we select data that machines should use and that will be used for training them then this influences the machine. Technology is presented as objective and unbiased but that isn’t true because it is also socially constructed.

Johannes Heiler

On all of these topics, our guests are unanimous on one point: more oversight from policymakers and the public is needed. Technology makes trade-off decisions explicit. As Miri explains, “whatever those tradeoffs and decisions are, they shouldn’t be left to technologists and algorithm designers who don’t have the context or authority to make these decisions”. We also need more public involvement: people should know what these tools do and be able to validate the system. We need to be able to demonstrate whether the system is doing what we want it to do.

The question is who decides, and what safeguards are there? To change things for the better, we should ask how we can help decision makers in their decision-making processes, rather than replace them. Johannes points to the problem of human use of the tools: border guards, for instance, don’t understand how their tools work and haven’t participated in their design. According to him, that is a problem: people should be aware of the system and its human rights implications. If not, “they will just follow the decisions made by the tech”.

There is a need for independent oversight.

Johannes Heiler

Miri suggests that perhaps we should rethink our relationship with these technologies: they should be thought of as “binoculars” that help law enforcement see new things but do not remove the decision from officers.

On a more personal note, are our experts worried?

Johannes is worried about the experimental use of technology in general. This technology is being used in conjunction with others (facial recognition, video analysis, automated licence plate readers, etc.). The evidence on the accuracy of these systems is not very clear, and that is worrying as these tools are “high-risk”.

Very often things are implemented which are untested and where there are really serious concerns about their implications.

Johannes Heiler

Miri adds that technology does not necessarily mean things get better; sometimes it makes things worse. We should work much harder to make sure that the technology implemented is making things better. But to end on an optimistic note, she thinks that this is possible, though it requires cooperation between policymakers, the public, and law enforcement.

Statistics and data and technology can improve outcomes but you have to carefully make sure that is what’s happening because they can also make them worse.

Miri Zilka

Our Panelist:

Nanna Lilletvedt Sæten is a first-year PhD student in political theory at the Department of Politics and International Studies, University of Cambridge. Her research centres around the politics of technology and time. Before coming to Cambridge, Nanna did her MSc on Arendtian violence at the University of Copenhagen, and she has previously worked for the Norwegian Embassy in Dublin on issues at the intersection of technology and policy.

Our guests:

Johannes Heiler, Adviser on Anti-Terrorism Issues, OSCE Office for Democratic Institutions and Human Rights (ODIHR) is a human rights professional from Germany who serves as Adviser on Anti-Terrorism Issues in the Human Rights Department of ODIHR. He has worked at ODIHR in different capacities since August 2013, including in the implementation of projects to strengthen the protection of human rights defenders. From 2003 to 2013 he worked at Amnesty International in London, where he was primarily engaged in the human rights law and policy area and conducted advocacy work on a broad range of issues with international and regional human rights mechanisms and institutions, including the United Nations and the Council of Europe.

Miri Zilka is a Research Associate in the Machine Learning Group at the University of Cambridge where she works on Trustworthy Machine Learning. Her research centers around the deployment of algorithmic tools in criminal justice. Before coming to Cambridge, she was a Research Fellow in Machine Learning at the University of Sussex, focusing on fairness, equality, and access. Miri obtained a PhD from the University of Warwick in 2018. She holds an M.Sc. in Physics and a dual B.Sc. in Physics and Biology from Tel Aviv University. Miri was awarded a Leverhulme Early Career Fellowship to develop a human-centric framework for evaluating and mitigating risk in causal models, set to start in May 2022. She is a College Research Associate at King’s College Cambridge and an Associate Fellow at Leverhulme Centre for the Future of Intelligence. Miri is currently on a part-time secondment to the Alan Turing Institute.

Further reading

O’Neil, Cathy. Weapons of Math Destruction

Benjamin, Ruha. Race after Technology.

Noble, Safiya. Algorithms of Oppression.

Posted by Declarations on

Season 6 Launch episode – Keeping Up: Human Rights in an Age of New Technologies

The Declarations Podcast is back for its sixth season! In this episode we provide an overview of the topics we will be discussing in each of the season’s episodes. Maryam Tanwir, this season’s host, discusses these themes with our panellists, who each present what is at stake.

“Predictive policing contributes to reproducing existing patterns and diverting the focus towards, for example, property crimes and overlooking, for example, white collar crimes.”

The first episode we discussed looks at predictive policing. Predictive policing, or predicting crime, is not new, in the sense that society and law enforcement have tried to prevent criminal activities for centuries. But today, predictive policing entails leveraging techniques from statistics and machine learning to predict future criminal activity. Data on past criminal activity is used to train algorithms to identify patterns – either hot zones for crime or individuals of interest. The goal of predictive policing is to prevent crime and better allocate police resources to areas of interest, with the idea that technology may help make the policing process fairer and more neutral, bypassing the heuristic bias of the individual police officer. There are a number of human rights issues with predictive policing as it functions today. The kind of data fed into the algorithm is not necessarily neutral, but reflects the past bias of recorded crime in any police registry. Thereby, predictive policing contributes to reproducing existing patterns, diverting the focus towards, for example, property crimes and overlooking offences such as white-collar crimes. This has led to over-policing and the disproportionate targeting of vulnerable populations, which has serious human rights implications and has led to massive protests. One example: in early November 2021, the LAPD was forced to discontinue its use of the PredPol software following public outcry. In this episode of Declarations, we will be speaking to human rights experts and academics on the human rights implications of this emergent technology. What happens to the presumption of innocence in predictive policing? How can we secure the right not to be arbitrarily detained or targeted? How do we ensure equality before the law? What does it mean to prevent a crime before it has even been committed?

“The questions of who controls this data, how secure it is and how hard it is for it to be hacked into by various actors are of utmost importance”

We then moved on to a preview of our episode looking at the collection of biometric data on refugees, delving into the case of the Rohingya. The starting point is that in June, Human Rights Watch released a report stating that the UNHCR improperly collected Rohingya data and shared it with the Myanmar government. This spurred a wide debate about the way in which Rohingya data have been collected, and more generally about how biometric data are collected from refugees. The UN defends these practices as a more dignified way of registering refugees – one that is more secure and efficient, guards against potential fraud and double registration, and appeases the national security concerns that many donor countries have expressed. This is problematic from a human rights perspective. The questions of who controls this data, how secure it is, and how vulnerable it is to hacking by various actors are of utmost importance, as are questions of consent and the power relations between aid agencies and refugees. How much can refugees really give informed consent if they don’t know where their data is going? This is happening in many different places around the world: in Afghanistan, Kenya, Somalia, Syria, and Yemen, as well as Bangladesh.

“These games are caveated by the fact that you can just switch your phone off at any time and tap out of the danger, which is something that is not possible if you’re a refugee”

The next episode we discussed will examine video games that simulate a first-person perspective of life in refugee camps. Can these be an effective way of raising awareness about this experience and building empathy? Some of them use virtual reality and radically put the player in the shoes of the refugee. There are games like “Bury me my love”, where you are inserted straight into the phone of an anxious husband as he guides his wife, Nora, from Syria to Europe, modelled on the published texts of a woman making the same journey. Some of the other games use VR to give the player a true first-person perspective, and others let you play as an avatar, making life-and-death decisions throughout the camps. While the idea of these games is to educate people about the migrant experience, the dangers faced, and the emotions felt, we have to ask how effective they really are at changing perceptions. They could be a fantastic educational tool, but we also have to ask whether they trivialize the refugee experience. These games are caveated by the fact that players can just switch their phones off at any time and tap out of the danger, which is not possible for refugees. In this light, can they really simulate what it would be like to feel the emotions of a refugee? Games are the most widely consumed form of media at the moment and, like so many other technological solutions to human rights issues, need to be seriously considered for their potential benefits. It is far more complicated than a black-or-white answer.

“Since the turn of the century, migration has increasingly been cast as a security issue, rather than a human or social issue, with borders themselves becoming geopolitical zones of conflict.”

Following that, we moved to a preview of our episode on the securitization of the EU’s borders. Since the turn of the century, migration has increasingly been cast as a security issue rather than a human or social issue, with borders themselves becoming geopolitical zones of conflict. What some call ‘Fortress Europe’ is a product of decades of investment in the securitization and militarization of Europe’s borders, whose operations reinforce the construction of a ‘safe’ internal space of Europe and an ‘unsafe’ external space, institutionalizing reactive suspicion towards migrants and asylum seekers rather than humanitarian responsibility. This episode will ask about the relationship between such techno-solutionism and the prevalent discourses of violence and threat that surround migration into Europe. Are they entwined? Does one cause the other? Or are they simply coincidental in a digitalising world? What help or hindrance can the machine’s perspective bring to such a deeply human issue? We will be looking at the legality and nuances of this technological development, including its potential challenge to Article Six of the European Convention on Human Rights, the right to a fair trial. An interesting case in this respect is currently before the European Court of Justice concerning video lie detectors used on migrants crossing into Greece, Latvia and Hungary, which scan their facial expressions as they respond to questions to see if they are ‘lying’. We anticipate a result within a few days of recording, something that will be interesting to return to. With the increasing automation of the border, more and more decisions – decisions on which someone’s life, health and security hinge – are being displaced from the human to the machine.

“The main question we will aim to unpack in our discussion is whether live facial recognition is the path to a surveillance state, or whether it could be reconciled with human rights standards.”

The next episode on our agenda focuses on live facial recognition, a widely debated topic in recent years, both in the UK and internationally. Several organizations advocate against the use of this technology based on Article Eight of the Human Rights Act, which aims to protect the right to private life. Academic research on the topic takes a different approach, looking at both the advantages and the disadvantages of this technology in various contexts and focusing more on public attitudes towards facial recognition. It asks why citizens across countries have different views of how, or whether, this technology should be used. In short, the main question we will aim to unpack in our discussion is whether live facial recognition is the path to a surveillance state, or whether it could be reconciled with human rights standards. To explore this topic, we hope to bring a wide range of perspectives on the current use of live facial recognition by various institutions, both public and private. We will also ask which actors should have access to individuals’ facial recognition biometric data – should it be the government or the police, for security reasons? Could this be extended to private companies under any circumstances? We also seek to find out how much of a say the public should have on the use of this technology and whether or not they are sufficiently informed about it at the moment. Finally, and perhaps most importantly, what should our aims be regarding live facial recognition in the future? Is there a way to deploy it in a human rights compliant manner, or should it be abolished completely?

Some American estimates say AI could displace a quarter of all jobs.

We then begin to explore a frequently discussed and contested aspect of artificial intelligence: its relationship with employment and how it is already causing, and could continue to cause, mass redundancies in many fields, which we will look at from a human rights perspective. Some American estimates say AI could displace a quarter of all jobs. While it will certainly create new jobs, its overall effect is still unclear: what is certain is that there will be a great shift in the job landscape. We will be considering whether human rights might be fundamental in the future, as we reconcile the progress of AI with the protection of employment, careers and workers. This topic brings up a lot of interesting issues, the answers to which aren’t clear at all. One key issue is whether there is a human right to work in the first place, and whether AI replacing jobs on a potentially very wide scale undermines or breaches this right. Do current international human rights instruments cater to this situation? If there is no such right, should there be? Even if we can say there is a relevant human right, what can governments across the world be expected to do to uphold it? How do they protect jobs? Can we expect the progress of AI to protect workers? In a way, there is a fundamental tension in balancing technological advances and the benefits they can bring against their impact on certain groups in society.

We are going to be exploring this topic not just through an academic point of view, but also through on-the-ground experience, thinking about how women can protect themselves and the often-exploitative nature of the industry.

The conversation then moved on to our upcoming episode on deepfakes. Deepfakes are videos in which the face of the actor is swapped for another face. The person manufacturing the video is then able to control the subject’s facial expressions and actions, which often results in those affected appearing to perform actions to which they have not consented. Deepfakes have gained a lot of popularity in recent years: during the 2020 elections we saw fake videos of Donald Trump saying outrageous things, or Mark Zuckerberg making unsavoury comments. But it becomes extremely problematic when we follow where the money goes, which isn’t to politics but to the adult entertainment industry, and particularly porn. Research shows that 90 to 95% of deepfake videos online are non-consensual porn, and 90% of that is non-consensual porn involving women – a horrifying statistic. In this episode, we are going to be exploring this topic not just from an academic point of view, but also through on-the-ground experience, thinking about how women can protect themselves and the often-exploitative nature of the industry. This issue is especially important because the UK made revenge porn illegal in 2015, but current legislation does not encompass new technology such as deepfakes, leading the UK Law Commission to start a review of the law.

The final episode we discussed will look at internet shutdowns in Pakistan. We will be speaking with Pakistani activists who are moving the needle, creating awareness about human rights and human rights violations.

The entire podcast team is looking forward to discussing these fascinating topics with our panellists and their guests. Stay tuned! 

Posted by Declarations on

Kathleen Schwind: Water Security and How to ‘Ignite Your Story’

In our final episode of the season we are delighted to be joined by Kathleen Schwind. A 2015 Coca-Cola Scholar, Kathleen focuses her research on the issues of water security in the Middle East and North Africa. She has studied at MIT and the University of Cambridge and joins our host, Muna Gasim, to discuss the problem of water shortage and its interaction with politics and international relations, as well as giving advice on how to find your passion and make a positive change at any level. An insightful and inspiring conversation, this episode offers a microcosm of what Declarations has sought to achieve over the course of this season: shedding light on pressing problems in our world today and, through our guests, offering guidance on how to solve them.

Growing up in rural California, Kathleen quickly became aware of the problem of water scarcity and the extent to which it could divide communities. She remembers her high school days, when farmers, residents and senior local officials would argue and debate over access to water. It is this that captured her attention and laid the foundations of her recent and ongoing research into the issues around water in the Israeli-Palestinian conflict. The Joint Water Committee, formed as part of the 1995 Oslo Accords, was intended to be a temporary measure but quickly became one of permanent significance, with the reliance on political cooperation for continuous and safe water supplies in the region ensuring that water cannot be forgotten when analysing the ongoing conflict. How the committee should be restructured and operated formed the bulk of Kathleen’s research while she was at MIT but, as she and her childhood experiences remind us, issues of water are not confined to areas of ongoing conflict; they impact the everyday life of people across the globe and from all walks of life.

‘Water is a very political issue whether you like it or not’ 

Kathleen Schwind

In the midst of the ongoing Covid-19 pandemic, water scarcity has only grown in significance. Across much of the world the message has been to wash your hands regularly and thoroughly, raising the question: ‘what about those who do not have access to fresh water?’. It is in this current climate that Kathleen has seen an increase in the number of small organisations, local communities and entrepreneurs seeking to take the initiative and bring change about themselves. Bridging divides, such as those between Israelis and Palestinians, these people have partnered with their neighbours to try and make a positive impact. Not only demonstrating the pressing nature of water shortages, these projects and ambitions also exemplify the benefits of finding your passion and seeking to act upon it. 

It is at this point in the episode that Muna turns to discuss Kathleen’s scholarship. Growing up in a rural community where there were few opportunities for young people not blessed with athletic talent, Kathleen decided she wanted to change this. Launching the Gifted And Talented Educational Olympics (GATE Olympics) when she was in 4th grade created an opportunity for children to show off their problem-solving and intellectual talents. Kathleen was later named a Coca-Cola Scholar, reflecting the positive impact she had had on her community by offering a chance for both competition and recognition to young people who had not previously been celebrated to that degree.

The initiative and ambition Kathleen showed in creating the GATE Olympics is the focus of her new book ‘Ignite Your Story’. The book recounts the lives of other Coca-Cola Scholars she has encountered, whose passions and actions have improved the world around them. This not only heralds their achievements, but also offers the reader examples of how to make positive change. Details of the book and where to purchase it can be found below.

Links to further information:
www.igniteyourstory.com 
https://www.igniteyourstory.com/our-story