Earlier this month, Lord Sales delivered an important speech on ‘Algorithms, Artificial Intelligence and the Law.’ His focus was how the law should adapt to a world in which algorithms – sets of digital rules designed to take decisions – mediate so many of our interactions with other people and with government. He exhorted lawyers to begin ‘engaging with the debates about the digital world now, and as a matter of urgency.’ In that spirit, this post reflects on Lord Sales’ remarks and how public law might adapt, or already be adapting, to the use of algorithms in administration.
Why should we care about algorithmic government now?
Lord Sales describes two futures. In one future, algorithmic government is more efficient, more targeted, and more accessible. Algorithms make decisions more cheaply and more quickly than human beings, enabling government to divert ‘the savings into more generous benefits.’ They ostensibly help government to target services more precisely to help those in need. And the digitalisation of courts and tribunals ‘offers the potential to improve access to justice and greatly reduce the time and cost taken to achieve resolution of disputes.’
In the alternative future, algorithmic government is a nightmare of inscrutable and unrelenting power. People are subjected to ‘an infernal machine over which they have no control and which is immune to any challenge, or to any appeal to have regard to extenuating circumstances, or to any plea for mercy.’ Government by algorithm reinforces biases based on race, gender, sexuality and class, and neglects broader ideals.
We can see aspects of both possible worlds in our government today. ePassport gates make it quicker and easier for people to enter the United Kingdom, but only if they are from the UK, the EU or other ‘low-risk countries’. The digitised welfare state is said to be ‘efficient’, ‘effective’ and ‘tailored to individual circumstances’, but leaves many people feeling like they are stuck in a ‘black hole’. The early evidence is raising red flags. In a recent thematic report on digital welfare states, the UN Special Rapporteur on Extreme Poverty observed that, despite the ‘irresistible attractions for governments’ to use automated technologies, there is ‘a grave risk of stumbling zombie-like into a digital welfare dystopia.’ The House of Commons Justice Committee has also just reported, in less than glowing terms, on the UK’s attempt to introduce online courts and tribunals: ‘Courts service modernisation, including use of better IT to be more efficient, is long overdue. But we have found that poor digital skills, limited access to technology and low levels of literacy and legal knowledge raise barriers against access to new services provided by digital means.’
Public law has an integral role to play in determining whether our algorithmic future ends up more positive than negative. As Lord Sales put it, the ‘law has to provide structures so that algorithms and AI are used to enhance human capacities, agency and dignity, not to remove them. It has to impose its order on the digital world and must resist being reduced to an irrelevance.’
One response to algorithmic government might be to require algorithms to directly respect ‘human values.’ There is a large and growing body of research into whether and how robots can be programmed to act ethically. But Lord Sales correctly recognises that algorithms which are capable of full-blooded moral reasoning remain a distant possibility, and their mainstream use in the public sector is even more remote. His argument is that we ought to focus on the empirical reality of ‘where we are located at the present.’ The present use of algorithms in government largely consists of ‘crude forms of algorithmic coding’ which are not always capable of considering all relevant factors bearing on a decision, or relaxing a rule to meet the ‘equity’ of the case.
Lord Sales’ emphasis on the current empirical realities of digital administration—and not on imagined futures—is welcome. It is no doubt important to keep an eye on longer-term developments in machine ethics and strong artificial intelligence. But for public lawyers, the primary focus must be on how government uses technology now, and how legal frameworks and systems can be used or adapted to promote and protect the ‘human values’ which are already at risk.
Lord Sales canvasses a range of options to respond to present challenges. We engage here with three areas that we consider to be of critical importance to public lawyers: understanding of algorithmic systems in the public sector; how public law principles ought to apply to algorithmic systems; and mechanisms for challenging the use of such systems.
Understanding of algorithmic systems in the public sector
One potential response to algorithmic government is what Lord Sales calls the ‘republican response’: ‘arming citizens with individual rights’ so that they can challenge government interference with their interests. Here, the idea is a proactive and empowered public with the ability to participate in and ultimately change systems.
Lord Sales identifies two key problems with this ‘republican response.’ The first is digital illiteracy (which is better conceived of as system opacity). Most people, including most lawyers, have only a superficial understanding of how algorithms work and how government uses them to make decisions. The second is secrecy. Government often conceals the details of how it uses automated systems, on the basis that disclosure would reveal its suppliers’ trade secrets or enable people to abuse those systems. For example, the Home Office refuses to tell applicants to the EU Settlement Scheme why its automated system has been unable to confirm their continuous UK residence, because of ‘the risk of identity theft and abuse.’
The reasons for opacity in algorithmic systems go beyond those given by Lord Sales, but his examples demonstrate that we need more than individual citizens willing to participate in and challenge systems. Lord Sales concludes that this renders the republican response alone inadequate: individual rights achieve little ‘if the asymmetries of knowledge and power are so great that citizens are in practice unable to deploy their rights effectively.’
This conclusion leads Lord Sales to propose the creation of an ‘algorithm commission’: an independent group of experts tasked with reviewing proposed algorithms, at ‘the ex ante design stage,’ to ensure that they promote the public interest. The commission would generally operate in private, reviewing ‘commercially sensitive code on strict conditions that its confidentiality is protected.’
This proposal raises several interesting and important points, and deserves more thorough attention than we can give it here. However, we question whether Lord Sales’ response to the problem of secrecy concedes too much ground to government and its commercial suppliers. If government is unwilling to disclose the details of its automated systems, the better response might be that it simply cannot use those systems. Consider government policies. A policy is, in substance, a set of rules for the decision-maker to follow in exercising a discretionary power: the factors to be considered, the weight to be given to various factors, the procedure to be followed, and so on. In R (Lumba) v Secretary of State for the Home Department, Lord Dyson emphasised the importance of making such policies public: ‘it is in general inconsistent with the constitutional imperative that statute law be made known for the government to withhold information about its policy relating to the exercise of a power conferred by statute.’ That imperative would seem to apply whether the rules are written down in a traditional policy or encoded in an algorithm.
It is also doubtful that an ‘algorithm commission’ would entirely resolve the problem of understanding how algorithmic systems are working. Lord Sales recognises that ‘ex post challenges’ to automated systems would still be necessary ‘to allow for correction of legal errors and the injection of equity and mercy.’ People must be empowered to identify automated decisions that affect them and obtain assistance in asserting their rights, and public lawyers must be equipped to advise on and litigate automated decisions. The Digital Freedom Fund noted recently that lawyers have ‘a clear sense of urgency’ about the implications of automated government, but have struggled to identify ‘issues on which to litigate and suitable entry points to do so’. Public lawyers, as a professional community, must become familiar with these systems and the evidential challenges involved if they are to avoid Lord Sales’ dystopian picture of algorithmic government (for a detailed analysis of the role of evidence in judicial review of automated decision-making, see here).
How public law principles ought to apply to new algorithmic systems
Another key question for public lawyers is: which substantive legal principles should regulate the government’s use of algorithms? There is a spectrum of possible responses here, ranging from a view that traditional public law principles ought to apply in the ordinary way to a view that a new set of principles is required (perhaps in the form of a statute). Adopting a position somewhere in the middle of that spectrum, Lord Sales argues that there is no need for a dramatic renovation of public law. He suggests that what is required is ‘really no more than an extension’ of established principles of judicial review. Lord Sales refers specifically (albeit briefly) to the emerging doctrine of structural procedural review, under which policies and systems may be unlawful if they give rise to an unacceptable risk of unlawful or unfair decision-making. In this he echoes the recent comments of Carol Harlow and Rick Rawlings, who ‘see significant potential in structural procedural review for judicial engagement in this brave new world’ of automated government.
Assuming the correct starting position is that traditional principles of judicial review ought to be applied, this raises at least two questions for public lawyers. The first is essentially a doctrinal question: do the grounds as generally understood translate to practices of automated decision-making? Some may translate easily, but there is an awkward fit in multiple areas. Consider, for example, the algorithm used by Durham police to help them decide whether someone under arrest should be referred to a rehabilitation programme as an alternative to prosecution. The algorithm purports to assess the risk that the person will reoffend in future based on 34 variables, including the person’s postcode:
- Is it permissible for a person’s postcode to enter into this decision at all? There may well be a strong correlation at the population level between certain postcodes and criminal offending. But it seems an open question whether a public official can use such a correlation to justify a decision about an individual’s risk of recidivism, consistently with public law principles of rationality and relevancy.
- How does any one variable actually enter into the police’s ultimate decision? Because of the algorithm’s complexity, a person’s postcode ‘has no direct impact on the forecasted result, but must instead be combined with all of the other predictors in thousands of different ways before a final forecasted conclusion is reached’. In other words, even if a person’s postcode is legally irrelevant, it might be difficult to demonstrate that the police actually considered it in making their decision. A structural procedural challenge might avoid this problem, however. If the algorithm creates an unacceptable risk that an irrelevant factor will be considered in the decision-making process, this may be sufficient to establish its unlawfulness.
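The entanglement described above can be made concrete with a toy sketch. The code below is not the Durham model (which is reported to be a random forest over 34 variables); the features, thresholds and weights here are invented purely for illustration. It shows why, in an ensemble of even two decision trees, a person’s postcode can shift the forecast for one profile and leave another untouched, so its influence cannot be read off the model directly.

```python
# Illustrative sketch only: a toy two-tree ensemble. The real system is
# far larger; every feature, threshold and score below is invented.

def tree_a(postcode_band: int, prior_offences: int) -> float:
    # In this tree, postcode only matters when prior offences are low.
    if prior_offences >= 3:
        return 0.8
    return 0.6 if postcode_band > 5 else 0.2

def tree_b(age: int, prior_offences: int) -> float:
    # This tree ignores postcode entirely.
    if age < 25 and prior_offences > 0:
        return 0.7
    return 0.3

def risk_score(postcode_band: int, age: int, prior_offences: int) -> float:
    # The forecast averages the trees, so postcode's effect is entangled
    # with the other predictors rather than being a fixed, additive weight.
    return (tree_a(postcode_band, prior_offences)
            + tree_b(age, prior_offences)) / 2

# Changing only the postcode shifts the score for one profile...
low_prior_high_band = risk_score(postcode_band=8, age=30, prior_offences=0)
low_prior_low_band = risk_score(postcode_band=2, age=30, prior_offences=0)
# ...but not for another, because prior offences dominate tree_a.
high_prior_high_band = risk_score(postcode_band=8, age=30, prior_offences=4)
high_prior_low_band = risk_score(postcode_band=2, age=30, prior_offences=4)
```

Even in this two-tree toy, whether postcode ‘was considered’ in a given decision depends on the values of the other variables; with hundreds of trees and 34 predictors, isolating its contribution becomes correspondingly harder.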
There are many difficult issues to be worked through in this area. Academics such as Jennifer Cobbe have begun work on this important task, and it is a conversation which deserves more attention.
A second question pertains to the underlying purpose of the judicial review grounds. The assumed rationales for principles such as the duty to give reasons—i.e. that the process of reason-giving focuses the decision-maker’s mind, is more likely to lead to accurate outcomes, and treats individuals with dignity—may be difficult to translate (at least straightforwardly and without modification) to automated decision-making processes. At the same time, the traditional justifications for not imposing a general duty to give reasons—such as the burden on public officials of providing reasons—can be undermined or even fall away when a system is automated. As the justifications for principles shift, so too may the principles.
The judicial response to these questions will, over time, likely turn upon the complexities that automation presents for the application of current principles, and upon the extent of the courts’ institutional ability, constitutional competence, and simple willingness to apply principles flexibly in this new context. No doubt pragmatic considerations will be central too: if justice for individuals is not being provided through other mechanisms, the practical need for the courts to fashion a public law remedy will be a factor.
Mechanisms for challenging algorithmic systems
Algorithmic government poses challenges not just for substantive public law principles, but also for public law procedures such as judicial review. A key issue is evidence. Lord Sales argues that litigants will need to ‘educate’ judges on how particular algorithms work through expert evidence, which ‘will be expensive and time consuming, in ways which feel alien in a judicial review context.’ On this point, it is important to note that evidence, including expert evidence, is becoming increasingly common in judicial review, particularly in cases of structural procedural review. This trend makes review of automated systems less of an outlier in this respect. But it is important to scrutinise whether existing procedures are fit for purpose in the digital age. Lord Sales proposes a system whereby a court could refer an algorithm for ‘neutral expert evaluation,’ which might well alleviate some of these problems. It would not assist a person who is deciding whether to launch a challenge in the first place, however. As Cobbe has pointed out, it may be very difficult (not to mention costly) for a prospective litigant to obtain and assess the relevant data within the three-month time limit for judicial review. This is just one example of how automated decision-making may prompt a rethinking of judicial review procedure. Claimant-side innovation will be necessary but not sufficient; a wider reconsideration of that procedure will be needed.
Beyond judicial review, there is the important question of how we can provide for efficient and effective review of individual decisions by automated systems when those systems might be making thousands or millions of decisions every year. Lord Sales correctly suggests that ‘[i]t will not be possible to have judicial review in every case,’ and proposes that ‘the courts and litigants, perhaps in conjunction with my algorithm commission, could become more proactive in identifying cases which raise systemic issues and marshalling them together in a composite procedure, by using pilot cases or group litigation techniques’ (this has echoes of the excellent work of Adam Zimmerman and Michael Sant’Ambrogio in the U.S.). Public lawyers and advocacy groups, too, have a crucial role to play here: working together to identify potentially unlawful algorithms, and to bring systemic challenges which target the algorithm itself, rather than merely its impact on a particular individual. Ultimately, effective solutions will not be exclusively court-centric but will depend on a coherent administrative justice response.
Lord Sales’ lecture is to be warmly welcomed. It is a sophisticated and accessible account of the challenges that algorithms present to law, which will have particular resonance for public lawyers. There are points of contention, but public lawyers ought to take heed of Lord Sales’ key message: there is a need to start ‘engaging with the debates about the digital world now, and as a matter of urgency.’
Jack Maxwell is a Researcher at the Public Law Project.
Dr Joe Tomlinson is Senior Lecturer in Public Law at the University of York and Research Director at the Public Law Project. He has recently published Justice in the Digital State (Bristol University Press) which is available open access. He has also, along with Katy Sheridan and Dr Adam Harkens, recently published a working paper on evidence in judicial review of automated decision-making.