In 2019, the deepfake detection company Sensity published a report that identified 96% of deepfakes on the internet as pornographic, with 90% of these representing women. Deepfakes are a modern form of synthetic media created by two ‘competing’ AIs, with the goal of producing hyper-realistic videos, images, and voices. Over the past five years, this has led to major concerns about the technology being used to spread mis- and disinformation, carry out cybercrimes, tamper with human rights evidence, and create non-consensual pornography. In this episode, the last of this season of the Declarations podcast, host Maryam Tanwir sat down with panellist Neema Jayasinghe and guest Henry Ajder. Henry is not only responsible for the groundbreaking Sensity report, but is also a seasoned expert on deepfakes and synthetic media. He is currently the head of policy and partnerships at Metaphysic.AI.

Neema and Henry start with the question of definition. ‘Deepfakes,’ Henry tells us, can be defined as “AI-generated synthetic media, such as highly realistic synthetic videos, texts, etc.” There are legitimate uses of synthetic media technology, but the term ‘deepfake’ generally refers to malicious uses, such as non-consensual pornography. The phenomenon emerged in 2017 on Reddit, on a subreddit of the same name dedicated exclusively to swapping famous women’s faces into pornographic films. Back then, this was technically challenging: you needed significant skill and processing power. Today, the tools are far more accessible and even gamified – models come pre-trained, and you only need a few images.

“As it becomes more accessible, people are no longer focusing as much on celebrities and are moving more toward private individuals they know in daily life, and this has led to a scaling in terms of victims.

Henry Ajder

Neema then asks what kind of action can be taken to regulate deepfakes. Henry thinks the difficulty comes from the definition. If you are talking about synthetic images in general, regulation is an unrealistic prospect, as so many aspects of our lives rely on such images: cinema, Snapchat filters, and more. So, according to Henry, we should focus on malicious uses. The problem there is identifying culprits, and hoping they are in a jurisdiction where deepfakes are criminalized.

“This is truly a global issue, and countries around the world are trying to take action, but there is a question as to whether we are giving people false hopes.

Henry Ajder

Another problem is that, with technological progress, these operations will likely require less and less data in the future. For instance, ‘nudifying’ technology is increasingly accessible and will become widespread. Henry particularly worries about students, as they are generally tech-savvy and know how to use these tools. He is worried that young people – in particular, young women – are vulnerable.

Neema asks whether it would be good to bring these topics up in school, for instance in the context of sex education. Henry thinks that schools are one of the places where deepfakes are most problematic, even if they are sometimes seen as “just fun” or “just fantasy.” As such, education on the harms they cause could be useful. It is key to teach the younger generation that these technologies are profoundly harmful and cannot be construed as fun, even where they are not yet criminal. Henry is also deeply concerned about the way children are involved in these deepfakes, both as victims and as perpetrators.

“Making it clear that this is a form of digital sexual violence is key.

Henry Ajder

Could legitimate deepfake pornography be created – for instance, if a sex worker wanted to license their face? While the question is interesting, Henry worries that the risks of misuse will always be very high, potentially obliterating any potential for legitimate use. Only through a mechanism such as biometric authentication, with informed consent from all parties, could such a system be safe and avoid misuse.

Another issue is that it is practically impossible to check whether your image has been used against your will. When writing the report, Henry traced some of the videos back to their origins; after warning those involved about the malicious use that had been made of their faces, he realized that most of them did not know their images were being used. Unless deepfakes are used as weapons against them, victims generally don’t know they have been deepfaked. There is also a legal question over whether creating these fakes without sharing them should be criminalized (Henry believes it should).

“Can you build these systems in a way that avoids misuse? I typically think it would be difficult to do so.

Henry Ajder

Although the bulk of deepfakes target women, there are also cases of men, in particular homosexual men, being targeted, especially in countries where homosexuality is banned or stigmatized. In such cases, deepfakes can literally be a matter of life and death for the men whose images are used. Being pragmatic, Henry thinks one of our best bets is to push this technology to the dark corners of the Internet, and to make it clear that people who engage with it are engaging in criminal activity.

“There was no doubt that the vast majority of these people had no idea they had been targeted.

Henry Ajder

Our panellist:

Neema considers herself incredibly privileged to have worked over the years with those worst affected by failures of society and governance, which has fuelled her passion for human rights, an area in which she hopes to make a difference at both the policy and grassroots levels. Neema has often found herself working on community development projects in Africa, especially in Uganda and Tanzania, through both consultancy and NGO work. This inspired her to become the current President of the Afrinspire Cambridge Student Society and the fundraising officer for the Cambridge Hub. Years of community service later led Neema to establish her own education-based NGO in Sri Lanka. She is incredibly passionate about international development, the politics behind it, and policy. It is this that encouraged her to study Education, Policy and International Development at Cambridge.

Our guest:

Henry Ajder is a seasoned expert on deepfakes and synthetic media. He is currently the head of policy and partnerships at Metaphysic.AI, and he co-authored the report ‘The State of Deepfakes: Landscape, Threats, and Impact’ while at Sensity. This was the first major report to map the landscape of deepfakes, and it found that the overwhelming majority are used in pornography. A graduate of the University of Cambridge, Henry is an experienced speaker, frequently presenting keynotes, panels, and private briefings. He is also an established media contributor, regularly featuring on the BBC and in The New York Times, Vox, The Guardian, Wired, and The Financial Times.

Further reading

Ajder, H., Patrini, G., Cavalli, F., & Cullen, L. (2019). The State of Deepfakes: Landscape, Threats, and Impact. Deeptrace (Sensity’s 2019 report).

Beres, D. (2018). Pornhub continued to host “deepfake” porn with millions of views, despite promise to ban. Mashable.

Cole, S. (2017). AI-Assisted Fake Porn Is Here and We’re All Fucked. Vice.

Gregory, S. (2021). Deepfakes, misinformation and disinformation and authenticity infrastructure responses: Impacts on frontline witnessing, distant witnessing, and civic journalism. Journalism.

Harris, D. (2019). Deepfakes: False Pornography Is Here and the Law Cannot Protect You. Duke Law & Technology Review.

Mirsky, Y., & Lee, W. (2021). The Creation and Detection of Deepfakes. ACM Computing Surveys.

Yadlin-Segal, A., & Oppenheim, Y. (2021). Whose dystopia is it anyway? Deepfakes and social media regulation. Convergence.