Jennifer Cobbe: Confronting the Algorithmic State

[This is the final post in our symposium on all things algorithmic, AI and tech in administrative law. You can view earlier posts here, here and here. – Eds]

The UK’s recent fiasco with exams algorithms threw into sharp relief the fact that decisions about people’s lives, rights, interests, and entitlements are increasingly being made algorithmically. While this was concerning, it’s important to be clear that the algorithm worked as designed; Ministers knew in advance what the outcomes could be but saw no reason to intervene. Moreover, the exams algorithm was only one system producing one set of problems. Cardiff University’s Data Justice Lab found that, as of early 2019, automated systems were in use in many areas of public administration, including fraud detection, healthcare, child welfare, social services, and policing. In the UK’s emerging algorithmic state, the more vulnerable – those who need to access public services for welfare, housing, and immigration – are most likely to be harmed by automation. But many of these systems will, by their nature, be opaque, difficult to understand, and generally unaccountable. As public services are increasingly automated, the law must come to terms with the challenges of administration by algorithm.

The reality of automated decision-making

In many cases, the use of automated decision-making systems – in both the public and the private sectors – is grounded in the faulty idea that there is a technological solution to problems that are fundamentally socioeconomic or political in nature. This often comes hand-in-hand with questionable assumptions and quasi-ideological beliefs about the transformational power of data gathering and analytics technologies; similar (mis)conceptions about automation’s potential to bring increased accuracy and reduced bias often also play into automation strategies.

Sadly, though, the reality is not quite so straightforward. Yes, technology can sometimes be part of the solution, but simply throwing an algorithm at a policy problem – a choice often made more out of faith than reason or evidence – will not make that problem go away. Sometimes it will make the problem worse. As we saw with the exams fiasco, when implemented without great care, algorithms can even create whole new unforeseen problems that manifest in difficult and unpredictable ways.

This is partly because algorithms are reflexive – interpreting and reproducing the world according to the goals, interests, prejudices, and biases of their designers, deployers, and users. Indeed, although algorithms are at the heart of automated decision-making, when discussing algorithms as actually deployed and used, it doesn’t make much sense to talk of ‘algorithms’ existing independently of people at all. In reality, they are only one part of ‘algorithmic systems’: combinations of humans and technical components working together. Because algorithms are reflexive, understanding the human assumptions and priorities that go into the design, deployment, and use of the algorithm itself is fundamental to understanding how and why it works.

In the UK, algorithmic systems have often been developed for public administration in pursuit of values and metrics of efficiency and cost-saving without due regard to things that should be more important. Of course, no government should be spendthrift with public money, but cost-saving is not and can never be the overriding priority – the rule of law, human rights, and fundamental principles of good government should always be paramount.

Philip Alston, the former UN Special Rapporteur on extreme poverty and human rights, visited the UK and elsewhere in 2018 to assess the burgeoning digital welfare state. In his report, he concluded that automated systems were increasingly being used to “automate, predict, identify, surveil, detect, target and punish”. In Alston’s view, the people primarily affected by welfare digitalisation are subjected to conditions, demands, and intrusions that would never be accepted were they imposed on the better-off. Alston also expressed concerns that tech companies providing systems for the public sector operated in “an almost human-rights free zone”. While the methodical and systematic nature of automated decision-making may be perceived as more objective than human judgement, ideological pursuits are often encoded in the supposed scientific, statistical neutrality of automated systems, concealing values and assumptions which, as Alston argues, “are far removed from, and may be antithetical to, the principles of human rights”.

This is exacerbated by the fact that algorithms generally don’t treat people as individuals. When it comes to prediction and classification, algorithms rely on identifying general trends across groups, and those trends tell you little about any individual person. Yet a fair distribution of outcomes across a population doesn’t necessarily produce a fair outcome for any one person. And where distinguishing features between groups correspond to protected characteristics, or produce decisions that systematically disadvantage a group that shares a protected characteristic, unlawful discrimination can easily occur. If a public body wants to assess broad population trends – where to allocate resources, for example – then an algorithm might, if used carefully, be one useful tool among others to support a Minister or an official in formulating policy. But trying to decide on individual outcomes for people will often be a fool’s errand – for any decision involving the rights, interests, or entitlements of a person, there’s a serious risk of unfairness, discrimination, and a failure of natural justice.
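To make this concrete, here is a deliberately toy sketch in Python, using hypothetical numbers and school names rather than the actual Ofqual model: a rule that assigns every student their school’s historical average grade can show zero error for the group as a whole while misgrading individual students by two grades in either direction.

from statistics import mean

# Toy, hypothetical data: each school's historical average grade and the
# true (teacher-assessed) grades of this year's students.
schools = {
    "School A": {"historic_mean": 5.0, "true_grades": [5, 5, 5, 5]},
    "School B": {"historic_mean": 4.0, "true_grades": [2, 3, 5, 6]},
}

for name, s in schools.items():
    # The "algorithm": predict every student's grade as the school's past average.
    predicted = [s["historic_mean"]] * len(s["true_grades"])
    group_error = mean(predicted) - mean(s["true_grades"])
    individual_errors = [p - t for p, t in zip(predicted, s["true_grades"])]
    print(f"{name}: group-level error {group_error:+.1f}, "
          f"individual errors {individual_errors}")

# School B's group-level error is zero, yet its strongest student is marked
# down by two grades and its weakest marked up by two: a fair-looking
# distribution across the group, an unfair outcome for the person.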

The limits of administrative law

Thankfully, administrative law can go some way towards grappling with these problems, particularly when combined with data protection law’s restrictions on the use of automated decision-making and on personal data. However, although these frameworks – and their various oversight mechanisms – provide a necessary starting point, they aren’t sufficient to ensure that vulnerable members of our society are adequately protected from algorithmically-inflicted harms.

Because algorithmic systems potentially reproduce problems across all those who are subject to their decisions, individual remedies can only take you so far. Most likely, if there is one person affected by a problem that requires correction then there will be others, yet one person challenging an individual decision often can’t get at those broader systemic effects. Individually actionable remedies also depend on an ideal of a motivated, resourced citizen who can make enough of a noise for a problem to be addressed. Yet a realistic history of public administration would suggest that many injustices go unaddressed because they affect vulnerable, marginalised, or disadvantaged people and communities.

Because of this, without reform, both administrative law and data protection law will struggle to contend with increasing automation in the algorithmic state. Although administrative law is tentatively recognising systemic unfairness, the distinction drawn between ‘bureaucratic’ judicial review and ‘policy’ review means that these broader systemic effects can be out of reach of complainants challenging individual decisions that affect them. So too are the broader organisational and policy processes involved in developing and deploying algorithmic systems, which are instrumental in determining how decisions are reached in a way that is without analogy in human decision-making. And judicial review as an adversarial, court-based mechanism may simply be unsuited to the investigative, technical nature of overseeing algorithmic processes.

Similarly, data protection law, despite the social nature of personal data and the systemic harms that can result from its processing, treats people as individual data subjects. GDPR and the Data Protection Act afford data subjects various rights that they can exercise as individuals, leaving few mechanisms for addressing systemic issues beyond an overworked, under-resourced, and ineffective Information Commissioner’s Office.

Ethics, accountability and systemic change

Beyond law, there has been much talk of ethics in relation to data, algorithms, and AI. But ethical technology is impossible when the real human needs of those affected by decisions are a second-order issue and priorities of efficiency and cost-cutting are overriding. Where they are thoughtfully developed, widely adopted, and generally adhered to, ethical standards derived from moral philosophy might help in driving the cultural change within institutions that is undoubtedly necessary. So too might an algorithmically focused guide for those commissioning, designing, deploying, and using automated decision-making systems, akin to the Judge Over Your Shoulder for the algorithmic state. But these can only go so far without the backing of law and judicial enforcement.

To properly confront the challenge of the algorithmic state, new legal concepts, frameworks, and oversight mechanisms may be needed for the problems of an algorithmic system applied across many decisions, with limits on when algorithmic systems can and can’t be used, and an oversight body empowered to investigate complaints. The Parliamentary and Health Service Ombudsman would be an obvious option, given its role in considering complaints about public bodies, provided it was properly resourced. But it may be the case that an independent regulatory body is needed to oversee the growing algorithmic state, tasked at least with taking up individual or aggregate complaints and investigating problems systemically.

Accountability mechanisms also need to be built into algorithmic systems themselves. There has in recent years been much focus on explanations for algorithmic decisions. However, in many ways, explanations are something of a distraction – so too can be calls to see the algorithm itself. The problems with algorithmic systems are fundamentally human in nature, and people are responsible for them. Any hope of understanding how and why these systems were commissioned, designed, deployed, and operated needs technical and organisational record-keeping and logging mechanisms across the whole automated decision-making process – from conception right through to the consequences of individual decisions – so that these socio-technical algorithmic processes can be properly audited, investigated, and reviewed.
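What might such record-keeping look like in practice? The sketch below is only one possible shape, in Python, with hypothetical field names rather than any prescribed standard: a structured, append-only audit record for each decision, capturing which system and model version produced it, the policy it implements, the inputs relied on, the outcome, and any human review, so that an auditor can later reconstruct the chain from commissioning to consequence.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class DecisionRecord:
    # Hypothetical fields; a real scheme would be set by the overseeing body.
    decision_id: str
    system_name: str               # which algorithmic system made the decision
    model_version: str             # exact version of the model or rules applied
    policy_reference: str          # commissioning or policy document implemented
    inputs: dict                   # data relied on for this individual decision
    outcome: str                   # the decision actually reached
    human_reviewer: Optional[str]  # who, if anyone, reviewed or overrode it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, path: str = "decision_audit.log") -> None:
    # Append one JSON line per decision to an append-only audit log.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

Nothing in a record like this is technically difficult; the point is organisational. Such records have to be designed in from the start, across the whole decision-making process, rather than bolted on after something has gone wrong.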

It’s important to emphasise, though, that issues with automated decision-making in public administration aren’t just technical or bureaucratic problems that can be addressed with a few tweaks. Systemic change is needed, and making that change requires experts with a diverse range of technical, legal (academic and practising), social science, and policy backgrounds to work together to figure out how best to ensure that people are protected. Importantly, systemic change also needs serious and sustained engagement from a public sector that genuinely wants to make sure that problems like those caused by the exams algorithm don’t happen again. That, sadly, might be the biggest barrier of all.

The increasing use of algorithms in public administration raises a fundamental question – what kind of state do we want? A faceless algorithmic bureaucracy, impersonal and alienating, treating us not as people with hopes and dreams and needs but as data points to be normalised or fitted to a curve, with no regard for the real, human consequences of decisions? The anguish and anger generated by the exams disaster suggest not. If public sector automation programmes are to maintain credibility and public trust and serve all of the people, a fundamental cultural change is needed. But the law must also reform to ensure that people are protected, or legal regimes that have served well in the past will increasingly struggle to contend with this new reality. As administration is increasingly automated, administrative law faces new challenges that urgently need a response.


Dr Jennifer Cobbe is a Research Associate and Affiliated Lecturer in the Department of Computer Science and Technology at Cambridge. (Twitter: @jennifercobbe)
