David Tan: Treating Algorithms Like Humans – A Sketch of Rational Functionalism

[This is the second in our series of posts on all things algorithmic, AI and tech in administrative law. New content will be posted each week in September. You can view the first post here. – Eds]

The Cybermen, in the television series Doctor Who, are cyborgs who shout “Delete!” and will either kill others or convert them into cyborgs. Suppose you have the misfortune of meeting one of them. Would you pause and ponder whether shouting “Delete!” really indicates a murderous intent since Cybermen are half-machine? No, you would start running.

In this blogpost I want to sketch a similar approach for the judicial review of algorithms. Informally, an algorithm is a step-by-step procedure for achieving some goal or resolving some problem (for a nice informal introduction see this). I propose that asking whether algorithms “really” have mental states – for example in the Australian case of Pintarich – is similar to asking whether a Cyberman really intends to kill me. Instead, the appropriate inquiry is how judges should practically reason given that the government does use algorithms (assuming such use is constitutional: see discussion here and here). In many cases the algorithm functions like a human in changing rights and duties, or the algorithm forms the basis for a human’s decision. Thus, to achieve the goals of administrative law, the reasonable choice would be to treat the algorithmic “decision” as if a human had made it, or as if the algorithm formed part of the human’s reasoning. This approach is what I call rational functionalism.
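
To make the informal definition concrete, here is a minimal sketch of an algorithm in this sense – a hypothetical, simplified eligibility check of my own invention, not drawn from any real government system:

```python
# A toy "algorithm": a fixed, step-by-step procedure that takes some inputs
# and resolves a problem (here, a made-up eligibility question).
def eligible_for_benefit(fortnightly_income: float, assets: float) -> bool:
    # Step 1: refuse anyone over a (hypothetical) income threshold.
    if fortnightly_income > 1_000:
        return False
    # Step 2: refuse anyone over a (hypothetical) assets threshold.
    if assets > 250_000:
        return False
    # Step 3: otherwise, grant eligibility.
    return True

print(eligible_for_benefit(fortnightly_income=800, assets=10_000))  # True
```

A calculator, an Excel formula and a neural network all run procedures of this general kind; what differs, as discussed below, is how easy it is for a human to say why a given output was produced.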

Two preliminary clarifications have to be made. Firstly, this is a theory of judicial review, not of executive policy-making; I say nothing about the types of algorithms that governments should use. Secondly, this is a normative approach; it might not be consistent with existing legal norms in your jurisdiction.

Rational Functionalism

Rational functionalism is different to, but inspired by, functionalism in the philosophy of mind. I don’t have space to trace this heritage and instead will make do with introducing its main thesis:

When trying to achieve certain goals and where two things x and y have the same function in relation to that goal, it is instrumentally rational to treat x and y similarly.

This definition already incorporates a type of practical reasoning – instrumental rationality – although readers are invited to substitute their preferred theory. In this post, a choice is instrumentally rational where it maximises the chance of obtaining one’s goals. The “in relation to that goal” clause restricts the functions to relevant ones: to achieve the goal of survival, for example, a Cyberman’s ability to kill is relevant but its ability to make robotic sounds is not. The similar treatment is to run from both dangerous cyborgs and dangerous humans, which maximises our chances of survival. For a less sci-fi scenario, chess players learn and train with chess programs all the time. This is because both human players and computer programs are able to function as chess players. Importantly, the programs are better at the game than humans, and thus it would be irrational to decline to train with chess programs simply because they don’t really “strategise” or “plan”.

In administrative law, I propose there are two general ways to apply rational functionalism.

Treat-Like-Human: The algorithm and humans both do the same thing – change rights and duties – thus it is rational to treat them the same way to achieve administrative law goals.

Treat-Like-Reasoning: The algorithm and a human’s reasoning process both do the same thing – inform the human how a decision is to be made – thus it is rational to treat them the same way to achieve administrative law goals.

I now provide two illustrations of how these applications work in administrative law.

Illustration 1: Who Makes Decisions?

There is some discussion of whether an algorithm can make decisions and, relatedly, whether decisions can be delegated to them (see here and here). There are some simple cases where the decision is clearly attributable to humans. Consider the use of a calculator or an Excel spreadsheet (both of which use algorithms). No one would think a tax officer should be let off the hook because they plugged the wrong number into the calculator, or used the wrong function in Excel. In my opinion the Australian “robodebt” algorithm falls within this category: the government just used the wrong mathematical function. The automated nature of the process makes little difference; the government knowingly chose for it to function that way.
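
As an illustration of what a “wrong mathematical function” can look like, here is a purely hypothetical sketch in the spirit of the income-averaging problem; the figures, threshold and code are mine, not the actual scheme:

```python
# Hypothetical figures only: a person earns $13,000, all of it in the first
# ten fortnights of the year, and received benefits only while not working.
annual_income = 13_000.0
actual_income = [1_300.0] * 10 + [0.0] * 16     # fortnight-by-fortnight reality

# The "wrong mathematical function": smearing annual income evenly across all
# 26 fortnights makes it look as though income was earned while on benefits.
averaged_income = [annual_income / 26] * 26     # $500 in every fortnight

income_free_area = 150.0  # a made-up fortnightly threshold

def fortnights_over_threshold(incomes):
    """Count fortnights where reported income exceeds the threshold."""
    return sum(1 for x in incomes if x > income_free_area)

print(fortnights_over_threshold(actual_income))    # 10 - only while working
print(fortnights_over_threshold(averaged_income))  # 26 - an artefact of averaging
```

The point is not the arithmetic itself but that the error is fully traceable to a design choice a human made.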

Where things get complicated is with “black-box” algorithms such as neural networks (e.g. see this). While black-box algorithms can be extremely good at predicting things, it is unclear – or extremely difficult, even for experts – to interpret what the algorithm is doing. The problem is compounded in cases where the decision-making process relies mainly on the black-box. In these cases, it is harder to attribute the decision to humans because of their ignorance of how the algorithm works. If the reader is happy to attribute responsibility to the government, rational functionalism still provides guidance for assessing the legality of black-boxes, which I explain below.
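
A minimal sketch of the interpretability problem, assuming scikit-learn is available and using randomly generated data rather than any real administrative dataset:

```python
# Why a trained neural network is hard to "give reasons" for. Requires
# scikit-learn; the data are synthetic, not drawn from any real system.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                    # eight anonymous applicant features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # the "true" rule, unknown to the model's user

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

applicant = rng.normal(size=(1, 8))
print(model.predict(applicant))                  # a confident yes/no answer...

# ...but the only "reason" on offer is more than a thousand numeric weights,
# none of which maps onto a consideration a human decision maker could articulate.
print(sum(w.size for w in model.coefs_), "learned parameters")
```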

Using the Treat-Like-Human application allows us to treat black-box algorithms like a delegate. The algorithm performs the same function as a human delegate would: (a) making legal categorisations or labels and (b) causing a change to rights and duties. Alternatively, the Treat-Like-Reasoning application treats the algorithm as part of the reasoning process of the human decision maker. We assume that the reasoning of the algorithm just is the reasoning of the human, since they both function the same way: they lead to a decision being made.

Both applications are instrumentally rational here to achieve the goal of keeping the government accountable. Rationality demands the availability of judicial review regardless of how the decision came about; otherwise we attribute no legal accountability to the government, the very party that implemented the algorithm. At this point we have conflicting views: is the algorithm a delegate, or is it just a reasoning tool of the human? Because rational functionalism focuses on rationality, the answer depends on which application better achieves the goals of administrative law. I will leave the resolution of this question for another day.

Moving to the legality of a black-box algorithm, a rational functionalist takes the human functional equivalent of the algorithm and asks whether that human’s decision would be legal. The functional equivalent is a human who makes extremely accurate decisions but, when asked for a reason, just says “when my neurons fire in such-and-such a way it very rarely gets the answer wrong”. I suspect that under Australian law this decision would fail on certain grounds. Firstly, if there were relevant statutory considerations, the answer does not show that they were taken into account. Secondly, the answer likely fails on the “no evidence” ground of review. Since rational functionalism treats functional equivalents alike in relation to government accountability, where the human’s decision is statutorily non-compliant, so too is the use or decision of an uninterpretable algorithm.

An interesting issue arises regarding reasonableness and whether an algorithm or human needs to be interpretable to be reasonable – Krishnan, using the language of justification, proposes that a reliabilist might argue predictability is sufficient. Nonetheless, I believe the two grounds above pose enough of a problem for the use of black-boxes.

Illustration 2: What about Bias?

Lim argues that the ground of bias in Australia raises unique conceptual issues. In Australia, one of the conditions of bias is having a “partial mind”, and so Lim argues it is impossible for an algorithm to be biased since algorithms don’t have minds. (As an aside, this proposition is not obviously true. I suspect that functionalist variants of computationalism entail that primitive mind-states are realised by algorithms.) Nevertheless, there is plenty of evidence that algorithms can act in a discriminatory way without a mind (by discrimination I mean the algorithm prefers some groups to others, in a statutorily non-compliant way, whether or not the programmer intended this). Hence, we can define bias as mind-dependent, whereas discrimination is consequence-dependent.
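
A toy illustration of discrimination without a mind, assuming scikit-learn and entirely fabricated data: a model trained on historically skewed outcomes reproduces the skew even though the protected attribute is excluded and no one programmed a preference for either group.

```python
# Entirely fabricated data; requires scikit-learn. The model has no "mind",
# yet its outputs favour one group over the other in effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, size=n)               # 0/1: a protected attribute
merit = rng.normal(size=n)

# Historical decisions were harsher on group 1 for the same merit.
past_approved = (merit - 0.8 * group + rng.normal(scale=0.3, size=n) > 0).astype(int)

# The protected attribute itself is excluded, but a correlated proxy leaks it back in.
proxy = group + rng.normal(scale=0.2, size=n)
X = np.column_stack([merit, proxy])

model = LogisticRegression().fit(X, past_approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g} approval rate: {pred[group == g].mean():.2f}")
# The rates diverge: discriminatory in effect, with no "partial mind" anywhere.
```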

The starting point is that bias can be attributed to the decision maker where the decision maker is aware of the discriminatory nature of the algorithm. This is just the case of the calculator again. The problem truly arises where the decision maker is unaware of the discrimination. The Treat-Like-Reasoning application allows us to attribute the discriminatory algorithm to the human. The discriminatory algorithm and the “partial mind” function in the same way: they lead to decisions that do not comply with procedural fairness. Assuming procedural fairness is a goal of administrative law, to maximise it we should remedy discriminatory algorithms and partial decisions in the same way.

Conclusion

This has been a very short exploration of rational functionalism and much more work must be done to defend it. The promise of this approach, however, is that it doesn’t require great revision to existing administrative law concepts. We just apply existing doctrines to the algorithmic functional equivalent. To reduce it to a philosophical slogan: judicial review is not about metaphysics, it’s about practical reasoning.


Dr David Tan is a Lecturer in Deakin Law School and specialises in legal philosophy and public law. (twitter: @DavidTanDW)
