In this episode, host Maryam Tanwir and panelist Archit Sharma discuss the impact of technology on employment with our guests, Martin Kwan and Dee Masters. Artificial Intelligence brings many promises, but to many it is also a threat. As AI can increasingly perform tasks at low cost, what happens to those whose jobs are displaced by robots? And if we are using AI in the workplace to monitor employees and make recruitment decisions, how can we ensure workers’ rights are respected and that AI decisions are subject to sufficient oversight and accountability? This area is a complicated web of issues, but our guests have the expertise to help us better understand the stakes. Dee is a leading employment barrister at Cloisters Chambers with extensive experience at the intersection of Artificial Intelligence (AI) and employment, who advises companies on how to ensure their AI systems are compatible with the law and the rights of workers. Martin is a legal researcher and journalist, and a 2021 UN RAF Fellow. He has written many articles on topical human rights issues, including a fascinating recent article on automation and the international human right to work.

We begin by examining whether the right to employment exists under international law. Our conclusion is that it does, and is inscribed in the Universal Declaration of Human Rights (Article 23), as well as in the European Social Charter. States parties to the International Covenant on Economic, Social and Cultural Rights have an obligation (Article 16) to report the steps they undertake to protect these human rights to the UN Committee on Economic, Social and Cultural Rights. Parties submit detailed periodic reports and take these seriously, as public exchanges with the UN Committee show. Governments are already starting to monitor the changes wrought by AI and the ‘Fourth Industrial Revolution.’ The Swiss government was, for instance, a pioneer in addressing this in its periodic reports.

“The whole review process is diligent and stringent. The scrutiny provides an incentive for states to showcase their efforts and commitment to the right to work.”

Martin Kwan

Archit provides some context around AI’s potential impact on the labor market. The McKinsey Global Institute estimates that, by 2030, 30% of jobs will be taken by robots, whilst the World Economic Forum claims that AI will have replaced 85 million jobs worldwide across several industries by 2025. Martin agrees that AI threatens to imperil the right to work. It is up to states to come to terms with this and implement strategies to cope with AI-induced unemployment. One potential response is to ban AI outright: for instance, India has banned autonomous cars to protect millions of jobs.

“Mass redundancies can be prevented if the government is willing to and able to do so. But certain jobs are simply not savable in some countries or in some sectors.”

Martin Kwan

However, it is not always desirable to save jobs at all costs. Companies can make important gains thanks to AI and governments have an incentive to promote the use of AI to improve economic performance. Globalized competition means that consumers will shift to goods produced using AI technology, which will be more competitive in price. Martin believes that technological change cannot be completely halted, and that the practical reality may force companies and governments to favor policies that seek to use more AI and automation.

Martin believes corporations have an ethical responsibility to consider the human rights impact of their activities. It can be futile to ask them to protect jobs at all costs, but their ‘environmental, social and governance’ (ESG) agenda could integrate workforce sustainability. It is important to convince companies that workforce and competition are not at odds: if society becomes pauperized because of mass redundancies, companies’ profits will also be slashed. Martin is rather optimistic that preventing mass redundancy can become a priority in the business community.

Beyond the rights of workers whose jobs are threatened by AI, what are the rights of those who will not lose their jobs, but will see their working life reconfigured by AI? Dee provides us with her insights into this question, made all the more acute by the explosion of technology in employment relations in the context of the pandemic. This has led to a boom in the amount of worker data collected and the expanded use of AI tools to determine whether jobs should be slashed. 

This raises clear issues of discrimination, as we know that AI is subject to significant biases. Indeed, AI is fundamentally about stereotypes, about creating ideal-types of characteristics considered “positive” and “negative”. If you are outside its boxes – because of your appearance, for example – you may be at a severe disadvantage. As Dee tells us, AI is not only used to decide who to employ, but also who to dismiss. 

She argues that our anti-discrimination law can deal with problems raised by AI, but that transparency is key. We see this with job adverts: if you are a woman, you may not be shown some job adverts, but in most cases you cannot even tell that you are being discriminated against. Auditing code or impact is absolutely essential to bringing transparency, but Dee would like to go further and see companies detail the AI tools they use and explain their functions. AI is perceived as neutral, but it can also replicate biases or even be intentionally used to reinforce them.

“There is a marketing spiel out there, which is: rely on AI because machines aren’t biased. That’s very attractive, but when you look into it in more detail you realize that’s not always true.”

Dee Masters

Dee believes that AI can be useful in several cases, for instance to identify skills or distribute work based on these skills. We should not, however, march along that path just because it is useful. 

Another area in which AI threatens human rights is the right to privacy. Dee explains how, in the context of the pandemic and working from home, AI was used to detect whether employees were working “hard enough”, using cameras or keystroke-monitoring software to observe employees at all times. This is extremely intrusive and violates the right to privacy. In the US, organizations were found to be using machine learning to assess which employees were most at risk of Covid-19 in order to decide who to lay off.

“We’ve crossed this line in which these technologies have become normalized. It’s here to stay and it will be hard to rewind on that.”

Dee Masters

Once again, the laws we have in place are sufficient, but the issue is that legislators and employers are not up to speed on how existing legislation translates to new technology. For instance, with data, the GDPR does not explicitly state that you cannot discriminate in data collection and processing, and therefore leaves room for partial interpretations. Dee argues we need to tighten legislation and understand how data cuts across many areas of our lives. Enforcement is key, particularly avoiding the “siloing” that currently prevents these issues from being taken up in some forums. “Legal protection is meaningless if we don’t know how to apply it,” Dee tells us.

“We need to be more creative not only about these rights but also how they’re going to be enforced.”

Dee Masters

The employment relationship, based on personal trust, is fundamentally challenged by management via app. We can try to mitigate some of these effects by ensuring a human is involved at key junctures, without which we risk allowing unfair and discriminatory decisions. We know that AI is making decisions about dismissal; to Dee, “this is inconsistent with legal protections in this country.”

Dee hopes that, when cases start to be adjudicated, courts will find that these dismissals were unlawful. Until then, however, it’s a “brave new world”. The law will get there, but it will take time, and this is unsatisfactory to both employees and employers. Rather than change the law, the government first and foremost needs to explain it better. 

“People are waking up to the idea that AI and algorithms are making important decisions and they’re not liking it.”

Dee Masters

We need to build trust and show that this technology can be used in ways that are compliant with human rights. Dee would advise workers targeted by AI to use all legal frameworks available to them, and there are many: the right not to be unfairly dismissed, the right not to be discriminated against, and more. People may not realize that they are being unfairly treated, or that there are channels for remedy.

We also need to pay more attention to the companies higher up the value chain, which design the AI tools but are largely left off the hook today. The EU is looking at introducing obligations at every level in the value chain, a move that Dee thinks could be usefully imported to the UK.

So, while AI should not be stopped completely, there are red lines: we need to evaluate clearly the limits of acceptability. For Dee, AI should not make critical decisions about people’s lives; humans should not only review the decision, but also own it. Then – and only then – can we leverage AI for the common good.

Our panelist:

Archit is an LLM student at the University of Cambridge. He previously studied Law as an undergraduate there, and in his final year wrote a dissertation on how (and to what extent) human rights are protected in emergencies. This research was greatly influenced by the COVID-19 pandemic, and has left Archit with a desire to engage more in the future with the question of how human rights can deliver on their promises.

Our guests:

Martin Kwan is a legal researcher and legal journalist. He is a 2021 UN RAF Fellow, and also an Honorary Fellow of the University of Hong Kong’s Asian Institute of International Financial Law. He has written and published many articles in recent years on topical and complex human rights issues, and one such article concerns Automation and the International Human Right to Work.

Dee Masters is a leading employment barrister with extensive practical experience in the technology space, especially in relation to artificial intelligence and its relationship with equality law, human rights, and data protection. She set up the AI Law Consultancy with Robin Allen QC, which aims to help businesses navigate the rapidly changing technological arena and the legal implications of using AI. She has written extensively on the intersection of law and technology, including co-authoring a highly influential report last year: ‘Technology Managing People – the legal implications.’

Further reading

Martin Kwan, ‘Automation and the International Human Right to Work’ (Emory International Law Review)

Dee Masters and Robin Allen QC, ‘Technology Managing People – the legal implications’ (Cloisters Chambers)

Calum McClelland, ‘The Impact of Artificial Intelligence – Widespread Job Losses’ (iotforall)

Lili Cariou, ‘How is Artificial Intelligence Shaping The Future of Work?’ (BusinessBecause)