For this week’s episode, host Maryam Tanwir and panelist Nanna Sæten speak about predictive policing with Johannes Heiler, Adviser on Anti-Terrorism Issues at the OSCE Office for Democratic Institutions and Human Rights (ODIHR), and Miri Zilka, Research Associate in the Machine Learning Group at the University of Cambridge. Predictive policing leverages techniques from statistics and machine learning to predict crime. The human rights perspective raises several interesting questions about its use: as the technology functions today, it seems to perpetuate existing bias in police work, but could this be overcome? Using technology for police work also raises the questions of who is responsible for protecting human rights and how to decide whose human rights to uphold in cases of conflict. What is clear to both of our guests is that there need to be clear channels of oversight if human rights are to be protected in digitized law enforcement.

“All of these systems impact human rights.”

Johannes Heiler

This episode starts with a definition of the issue at hand. When we speak of predictive policing, we are usually referring to models that predict the time and place where crime will happen and, more generally, to all models that attempt to predict crime that has yet to happen. However, Johannes notes that it is important to distinguish between predictive policing that aims to map crime hotspots and models that attempt to predict crime at the individual level.

“What we don’t know is exactly how they work, we don’t know what type of info they take in, we don’t know the algorithms and, most importantly, we don’t know how they’re being used by the police.”

Miri Zilka

Can machine learning help us overcome the existing heuristic biases in policing, or does it accentuate them? The issue with AI is that it tends to reify and reproduce the human biases that went into the data. Where the police search for crime, there is a risk of additional bias, as police tend to look for crime in certain areas more than others (and victim reporting is not exempt from bias either). There is pre-existing overpolicing in certain neighbourhoods around the world, and this informs the tools used for predictive policing in a feedback loop, as the sketch below illustrates.
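To make this feedback loop concrete, here is a minimal simulation sketch (our own illustration, not something presented in the episode; the area names, rates and allocation rule are illustrative assumptions): two neighbourhoods have identical true crime rates, patrols are allocated in proportion to previously recorded crime, and crime only enters the data where patrols are present.

```python
import random

random.seed(0)

# Two neighbourhoods with the SAME underlying crime rate.
TRUE_CRIME_RATE = {"A": 0.1, "B": 0.1}
PATROLS_PER_DAY = 10

# Historical records already over-represent neighbourhood A.
recorded_crime = {"A": 12, "B": 8}

def allocate_patrols(recorded):
    """Send patrols in proportion to previously recorded crime
    (a crude stand-in for a hotspot-style predictive model)."""
    total = sum(recorded.values())
    return {area: round(PATROLS_PER_DAY * count / total)
            for area, count in recorded.items()}

for day in range(365):
    patrols = allocate_patrols(recorded_crime)
    for area, n_patrols in patrols.items():
        # A crime only enters the data when a patrol observes it,
        # so more patrols in an area means more recorded crime there.
        for _ in range(n_patrols):
            if random.random() < TRUE_CRIME_RATE[area]:
                recorded_crime[area] += 1

# Neighbourhood A typically ends up with substantially more recorded
# crime, and therefore keeps receiving more patrols, despite the true
# crime rates being identical.
print(recorded_crime)
```

Because the initially over-policed neighbourhood generates more recorded crime simply by being watched more, it keeps attracting a larger share of patrols: the data confirm the allocation, and the allocation produces the data.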

“There are real risks that the datasets that are used in the analysis are tainted by discriminatory policing from the start. The bias reproduces itself in the machine learning in a feedback loop. The whole system is built to perpetuate and reinforce discrimination.”

Johannes Heiler

However, does this mean that predictive policing is in and of itself problematic, or simply that its current uses are problematic? Miri argues that the technology itself isn’t the problem, but that its current uses may indeed be problematic. There are “safe uses” of the technology that can help law enforcement reach people in distress.

“The public might accept the use of certain tools if they are shown that they reap significant benefits.”

Miri Zilka

Technology, while often presented as more neutral than human-led processes, is not necessarily so. Both our guests agree that technology reflects the biases of the people designing technological artefacts, something which applies to predictive policing software.

Our guests are then asked why predictive policing focuses on petty crime rather than on white-collar crime. For both of our guests, some tools are already in place, but their uses are less controversial and thus receive less public attention. And even then, there are issues: for instance, bank accounts closed without notice and without reason.

It seems to our panelist and both of our guests that in recent years we are moving toward a more proactive type of policing rather than a reactive one. Under the pressure of terrorism, police departments across the world are increasingly trying to prevent crime from happening, rather than simply punishing it after the fact. However, as Johannes explains, “Preventing crime is also a human rights obligation of the state.” This shift thus makes sense, but it also comes at a price. In terrorism cases we target crime that has not yet been committed, which raises many issues. Can a crime be judged on intent alone?

“Bias is inherently human, and if systems are built and we select the data that machines should use and that will be used for training them, then this influences the machine. Technology is presented as objective and unbiased, but that isn’t true because it is also socially constructed.”

Johannes Heiler

On all of these topics, our guests are unanimous on one point: more oversight from policy makers and the public is needed. Technology makes trade-off decisions explicit. As Miri explains, “whatever those tradeoffs and decisions are, they shouldn’t be left to technologists and algorithm designers who don’t have the context or authority to make these decisions”. We also need more public involvement: people should know what these tools do and be able to validate the system. We need to be able to demonstrate whether the system is doing what we want it to do.

The question is who decides and what safeguards are in place. To change things for the better, we should ask how we can help decision makers in their decision-making processes, rather than replace them. Johannes points to the problem of human use of these tools: border guards, for instance, often don’t understand how their tools work and haven’t participated in their design. According to him, that is a problem: people should be aware of the system and its human rights implications. If not, “they will just follow the decisions made by the tech”.

“There is a need for independent oversight.”

Johannes Heiler

Miri suggests that perhaps we should rethink our relationship with these technologies: they should be thought of as “binoculars” that help law enforcement see new things but do not remove the decision from officers.

On a more personal note, are our experts worried?

Johannes is worried about the experimental use of technology in general. These tools are being used in conjunction with other technologies (facial recognition, video analysis, automated license plate readers, etc.). The evidence on the accuracy of these systems is not very clear, and that is worrying because these tools are “high-risk”.

“Very often things are implemented which are untested and where there are really serious concerns about their implications.”

Johannes Heiler

Miri adds that technology does not necessarily make things better; sometimes it makes them worse. We should work much harder to make sure that the technology we implement is actually improving things. But to end on an optimistic note, she thinks this is possible, though it requires cooperation between policy makers, the public and law enforcement.

“Statistics and data and technology can improve outcomes but you have to carefully make sure that is what’s happening because they can also make them worse.”

Miri Zilka

Our Panelist:

Nanna Lilletvedt Sæten is a first-year PhD student in political theory at the Department of Politics and International Studies, University of Cambridge. Her research centres around the politics of technology and time. Before coming to Cambridge, Nanna did her MSc on Arendtian violence at the University of Copenhagen, and she has previously worked for the Norwegian Embassy in Dublin on issues at the intersection of technology and policy.

Our guests:

Johannes Heiler, Adviser on Anti-Terrorism Issues, OSCE Office for Democratic Institutions and Human Rights (ODIHR) is a human rights professional from Germany who serves as Adviser on Anti-Terrorism Issues in the Human Rights Department of ODIHR. He has worked at ODIHR in different capacities since August 2013, including in the implementation of projects to strengthen the protection of human rights defenders. From 2003 to 2013 he worked at Amnesty International in London, where he was primarily engaged in the human rights law and policy area and conducted advocacy work on a broad range of issues with international and regional human rights mechanisms and institutions, including the United Nations and the Council of Europe.

Miri Zilka is a Research Associate in the Machine Learning Group at the University of Cambridge, where she works on Trustworthy Machine Learning. Her research centers around the deployment of algorithmic tools in criminal justice. Before coming to Cambridge, she was a Research Fellow in Machine Learning at the University of Sussex, focusing on fairness, equality, and access. Miri obtained a PhD from the University of Warwick in 2018. She holds an M.Sc. in Physics and a dual B.Sc. in Physics and Biology from Tel Aviv University. Miri was awarded a Leverhulme Early Career Fellowship to develop a human-centric framework for evaluating and mitigating risk in causal models, set to start in May 2022. She is a College Research Associate at King’s College Cambridge and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence. Miri is currently on a part-time secondment to the Alan Turing Institute.

Further reading

O’Neil, Cathy. Weapons of Math Destruction.

Benjamin, Ruha. Race after Technology.

Noble, Safiya. Algorithms of Oppression.