Terry Carney: AI, the Rule of Law and the Digital Welfare State

[This is the first post in a symposium on all things algorithmic, AI and tech in administrative law. New content will be posted each week in September. – Eds]

Australia’s roll-out of artificial intelligence in income security administration took a massive credibility hit in November 2019. On 25 November the Federal Court invalidated 470,000 alleged social security debts, affecting 373,000 individuals and totalling $721 million. These so-called ‘robo-debts’ were based on the faulty assumption that tax office data on average earnings over 6-12 months yielded accurate information about casual or intermittent earnings in the relevant social security payment fortnights. And they unlawfully shifted the onus of proof from the state onto alleged ‘debtors’.
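To see why an average says nothing reliable about its constituent fortnights, consider a minimal worked sketch (the figures and code are hypothetical illustrations, not actual departmental data or systems):

```python
# Hypothetical illustration of the robo-debt averaging fallacy
# (illustrative figures only, not actual departmental data or code).

FORTNIGHTS = 26

# A casual worker earns $1,000 a fortnight for half the year, then
# correctly reports nil earnings while relying on payments.
actual = [1000] * 13 + [0] * 13

# The robo-debt method smeared annual tax office income evenly
# across every fortnight of the year.
averaged = sum(actual) / FORTNIGHTS   # $500.00 per fortnight

for fortnight, earned in enumerate(actual, start=1):
    if averaged > earned:
        # Averaging imputes income to fortnights in which nothing was
        # earned, manufacturing an apparent overpayment.
        print(f"Fortnight {fortnight}: assumed ${averaged:.0f}, actually earned ${earned}")
```

On these assumed figures the averaging method imputes $500 of income to each of thirteen fortnights in which nothing at all was earned, so a person who reported their earnings perfectly accurately is nonetheless flagged as owing a debt.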

Faced with a class action seeking interest on unlawfully collected debts along with compensation for harm and suffering, the Government was still stonewalling in July 2020 rather than answering Parliamentary Committee questions. Those unanswered questions concern what went wrong with the Government’s 2015 debt collection design and due diligence, whether legal advice was obtained and whether internal risk assessment warnings were ignored.  The then UN Special Rapporteur on Extreme Poverty and Human Rights cited this debacle as a harbinger of the risk of a dystopian form of digital welfare state.

So what are the lessons for public law administration, and is AI in welfare necessarily dystopian?

Lessons for public law

The implications of AI for the rule of law have generated a large body of international scholarship. In Australia, these implications are well sketched in a recent paper by Yee-Fui Ng and her co-authors, who canvass considerations of procedural fairness, transparency of decision-making, privacy and equality. These are indeed critical domains for AI in welfare, but the vulnerability of welfare clients raises some neglected issues around speedy access to remedies and the risk of compounding inequality due to the ‘digital divide’, as now outlined.

The obvious first question for public law from the robo-debt saga is the remedial one. Why did it take three and a half years from the first exposure in July 2017 of the fatal legal and operational flaws in this bungled AI initiative to bring it to an end? Ultimately the rule of law did prevail in November 2019. But adverse first-tier administrative merits review rulings of the Administrative Appeals Tribunal were kept from the public domain because the Government declined to seek further merits review in the General Division (where rulings would be public). To similar effect, the first test case in the Federal Court was aborted by a late Government concession that there was now ‘no debt’.

There is nothing new about governments deploying the litigational and other stratagems available to powerful state actors to string out being brought to account, or about the tactic of administrative non-acquiescence: limiting adverse rulings to the case at hand while continuing to apply the overruled policy to everyone else. But such conduct is certainly unethical, and it breaches the long-standing common law obligation on the state to act as a model litigant. The inexcusable lack of robustness of accountability bodies such as the office of the Ombudsman (which twice failed to put the ‘legality question’ to government) also raises questions about the fitness for purpose of other checks on executive power.

A more chilling lesson from robo-debt may be its exposure of the fallacy of placing too much store in the transparency of AI algorithms as a disinfectant for public policy debate and a protection for the rule of law. Transparency is much touted, if not universally accepted, as a key ingredient of public law principles, including under the European General Data Protection Regulation (‘GDPR’). However, in the case of robo-debt the AI error was fully transparent from the very outset – the basic arithmetic mistake of thinking that an average speaks to its constituent parts – as too was the legal error of reversing the state’s onus of proof of the supposed debts.

Nor should it be assumed that restoring the ‘human in the loop’ of AI decision-making – the element that robo-debt removed – is any guarantee of sound decision-making. There is a very real risk that humans defer to the presumed superiority of the algorithm, rubber-stamping the AI outcome. So no comfort can be drawn from the majority reasoning of the full bench of the Federal Court in Pintarich that under Australian law an administrative ‘decision’ requires an ultimate human element. It offers no comfort not only because, as just discussed, the human element may prove a window-dressing veneer that leaves the AI outcome unchanged. More fundamentally, that ruling may prevent AI from being amenable to judicial review – because if a pure AI outcome is not a ‘decision’, judicial review is unavailable in Australia.

Lessons for AI in welfare

It appears trite to have to say so, but citizens on welfare are vulnerable on many counts.  They risk poverty should their already austere income safety-net be breached and thus are reliant on keeping government on side.  As victims of misfortune or the workings of markets, they often display a higher incidence of markers of persistent disadvantage.  These may include low educational status, compromised language proficiency, locational disadvantage, technological deficits (lack of or difficulty using devices), mental health issues, disability or chronic health problems, and having access to fewer ‘social assets’ in the form of well-informed family, friends or civil society agencies (voluntary organisations or non-government services).

Surprisingly – at least until regard is had to the architecture of its tax-funded, residual and heavily means-tested welfare state – vulnerability features little in either Australian social security law or its administration; an issue for another day. But there is surely no excuse to overlook the ‘digital divide’ when designing AI elements of social security administration. As Sofia Ranchordás observes, ‘digital exclusion reproduces existing socioeconomic cleavages, biases, and other forms of discrimination’.

Yet sadly the digital divide remains a neglected component of Australia’s more recent AI initiatives, such as ParentsNext for at-risk sole parents and the New Employment Services Trial pilot sites for a proposed July 2022 national roll-out. Both programs rely heavily on smartphone uploading of compliance activities. Individuals who fail to proactively upload their compliance activities automatically register default points towards triggering a sanction of reduction or loss of payments. Given the manifold problems – inability to afford a smartphone or data plans, service drop-outs and black spots, and other user-related barriers – the concern is that people are inappropriately shifted onto the technological innovation when they should be kept on a manual or less sophisticated reporting mode that carries less risk of a compliance failure.
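A minimal sketch of how such automated points-based sanctioning can operate is set out below (the threshold, names and point values are illustrative assumptions, not the actual rules of either program):

```python
# Hypothetical sketch of automatic compliance-point accrual; the
# threshold and names are assumptions, not the real program rules.

SANCTION_THRESHOLD = 5  # assumed number of points before a sanction

def close_reporting_period(points: int, activity_uploaded: bool) -> int:
    """Accrue a point whenever a required upload is missing."""
    if not activity_uploaded:
        points += 1
        if points == SANCTION_THRESHOLD:
            # No human judgment intervenes at this step: the sanction
            # follows mechanically from the missed uploads.
            print("Sanction triggered: payment reduced or suspended")
    return points

# A participant in a mobile black spot misses every upload:
points = 0
for _ in range(6):
    points = close_reporting_period(points, activity_uploaded=False)
```

The point of the sketch is that the sanction pathway turns entirely on whether the upload arrives, not on why it did not – precisely the step at which digital-divide barriers are invisible to the system.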

Conclusion

The Australian experience with misfiring AI initiatives in welfare is not unique, as demonstrated by overseas experience in the USA and the UK. AI in welfare is not necessarily all dystopian doom and gloom, however. Technology is technology and AI is just the latest. AI can be written to conform to rule of law precepts; it can be designed to advance the interests of the vulnerable, such as by automatically contacting people to let them know they are eligible for assistance (just as existing robo-debt customers are being automatically repaid); and it can be part of efficient and ethical welfare administration.

In theory this is what the Australian Government now claims to be committed to achieving, according to the July 2020 statement of the Minister for Government Services. What has been missing so far, however, is the ‘agile co-design’ process integral to any realisation of that goal. Until it is delivered, Australia’s digital welfare state risks remaining overly dystopian.


Terry Carney is Emeritus Professor of Law at the University of Sydney Law School.
