Politics & Society
What is the law when AI makes the ‘decisions’?
Robodebt teaches us that even simple automated decision-making systems come with the biases of the people, systems and policies that conceive them
Published 10 July 2023
Australia’s Royal Commission into the Robodebt Scheme has published its findings. And it’s a damning read.
The report refers various unnamed individuals for potential civil or criminal investigation. But its publication is also a timely reminder of the dangers posed by automated decision-making systems, and that the best way to mitigate their risks is to instil a strong culture of ethics, and strong systems of accountability, in our institutions.
The so-called Robodebt scheme was touted to save billions of dollars by using automation and algorithms to identify welfare fraud and overpayments.
But in the end, it serves as a salient lesson in the dangers of replacing human oversight and judgement with automated decision-making.
It reminds us that the scheme’s basic method was not merely flawed but unlawful; that it was premised on a false view of welfare recipients as cheats rather than as some of society’s most vulnerable people; and that it lacked both transparency and oversight.
At the heart of the Robodebt scheme was an algorithm that cross-referenced fortnightly Centrelink payment data with annual income data provided by the Australian Taxation Office (ATO). The idea was to determine whether Centrelink recipients had received more in payments than they were entitled to in any given fortnight.
The result was automatic debt notices issued to people whom the algorithm deemed to have been overpaid by Centrelink.
As anyone who has ever worked a casual job will know, averaging a year’s earnings evenly across its fortnights is no way to calculate what someone was actually paid in any given fortnight. It was this flaw that led the Federal Court to declare in 2019 that debt notices issued under the scheme were not valid.
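To make the flaw concrete, here is a minimal sketch in Python. The worker’s earnings and the payment threshold below are invented for illustration; they are not the real Centrelink rules, nor the scheme’s actual code.

```python
FORTNIGHTS_PER_YEAR = 26

# Hypothetical casual worker: $2,600 per fortnight for 10 fortnights of
# seasonal work, then nothing for the remaining 16 fortnights.
actual_income = [2600] * 10 + [0] * 16

annual_income = sum(actual_income)                     # $26,000, as reported to the ATO
averaged_income = annual_income / FORTNIGHTS_PER_YEAR  # $1,000 'per fortnight'

# Invented threshold above which a fortnight's earnings would reduce a payment.
THRESHOLD = 900

# Under averaging, every fortnight appears to breach the threshold...
flagged = sum(averaged_income > THRESHOLD for _ in range(FORTNIGHTS_PER_YEAR))

# ...but the worker genuinely exceeded it only while actually working.
genuine = sum(income > THRESHOLD for income in actual_income)

print(f"Fortnights flagged under averaging: {flagged}")      # 26
print(f"Fortnights genuinely over the threshold: {genuine}") # 10
```

On these made-up numbers, averaging manufactures 16 fortnights of apparent overpayment that never happened.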
This kind of algorithm is known as an automated decision-making (ADM) system.
ADM systems have become very common: pre-screening algorithms that analyse job applications before a human ever sees them, credit-scoring algorithms that determine who gets a loan, and recidivism-prediction algorithms used in criminal sentencing, to name just a few.
ADM systems are known to pose serious risks – including entrenching bias, eroding privacy, and denying procedural fairness, transparency and contestability in decision-making.
These problems can be exacerbated when the ADM system is itself complex.
But in contrast to many modern ADM systems, the algorithm employed by Robodebt was comparatively simple. Unfortunately, this means its harms should have been entirely predictable from the outset.
Examples of complex ADM systems are those that employ machine-learning models to help make their decisions.
Unlike traditional algorithms that can be thought of as a set of rules, which are intentionally designed and programmed by a human, machine-learning models are not ‘programmed’ so much as ‘learned’ by statistical inference over vast amounts of data.
This data-driven approach to algorithm design has given us the voice recognition technology that powers Siri, plus the large language models that power ChatGPT, and much besides.
The downside of this approach is that algorithms developed this way tend to behave far less predictably than those that are human-programmed. They also tend to inherit the biases present in the data from which they are derived.
This is the reason that popular AI image generation tools exacerbate existing gender biases, for example.
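As a toy illustration of how this inheritance works, consider a deliberately simple ‘model’ in Python, ‘learned’ from invented historical decisions that are skewed against one group. The data, groups and outcomes here are entirely hypothetical.

```python
from collections import Counter

# Invented historical decisions: (group, outcome) pairs in which group B
# was denied far more often than group A, for no legitimate reason.
history = ([("A", "approve")] * 80 + [("A", "deny")] * 20
           + [("B", "approve")] * 30 + [("B", "deny")] * 70)

def train(data):
    """'Learn' a rule by taking the majority outcome observed for each group."""
    counts = {}
    for group, outcome in data:
        counts.setdefault(group, Counter())[outcome] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

model = train(history)
print(model)  # {'A': 'approve', 'B': 'deny'} -- the historical skew, now automated
```

Nothing in this code mentions bias, yet the learned rule faithfully reproduces the skew in its training data – and would then apply it automatically, at scale.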
You might imagine that only algorithms developed using data-driven approaches inherit the biases of their society. But page 28 of Chapter Three of the Royal Commission’s report makes clear that the Robodebt scheme was the product of a political culture in which the then Minister for Social Services, former Prime Minister Scott Morrison, was perceived as a “welfare cop”, “ensuring welfare integrity” against people who were “rorting the system”.
So, underpinning the Robodebt scheme was an implicit bias that viewed welfare recipients as potential cheats, and that saw social services as a budget area from which to extract savings rather than one in which to invest to improve the lives of the most disadvantaged.
Robodebt teaches us that even simple ADM systems can encode the prejudices of the people and systems that conceive them.
Robodebt was unlawful not only because it relied on income averaging, but also because it disregarded rule-of-law principles requiring procedural fairness and contestability.
To successfully challenge a Robodebt, a person had to be able to produce, for instance, their payslips for the period in question; without these, they had no evidence with which to argue that the debt was invalid.
This meant that for many people, Robodebts were effectively uncontestable.
Moreover, the scheme reversed the burden of proof, requiring the accused to prove that they had not incorrectly declared their income to Centrelink.
But even beyond these fundamental flaws, Robodebt also reminds us that the use of ADM systems for purported “cost savings” cannot justify removing human oversight from the picture altogether.
In a nutshell, Robodebt’s woes lay in the fact that the ‘machine’ had effectively subsumed parts of the process traditionally conducted by a human:
“There was no meaningful human intervention in the calculation and notification of debts under the OCI [Online Compliance Intervention] phase of the Scheme. This meant that debts being raised on incorrect data – or incorrectly applied data – were issued with no review.” (page 477).
Chapter 17 details the role of the ‘compliance officer’, whose duties included investigating cases, checking for discrepancies and contacting the individuals involved – functions the automated scheme effectively took over.
“Automating the human out” effectively denies affected people any form of recourse, or any understanding of how and why an adverse decision has been made against them.
This marks a clear departure from the Commonwealth Ombudsman’s frequently updated Automated Decision-making Better Practice Guide, which sets out best-practice guidelines for “expert systems”, and from the Australian AI Ethics Principles, which aim to “reduce the risk of negative impact on those affected by AI applications”.
These failings ended up costing $A565.195 million: rather than saving the government money, Robodebt ultimately cost it more than half a billion dollars.
And even that reckoning came only because affected individuals and their advocates fought for years for justice over wrongs that would never have occurred had the system been designed to uphold the law in the first place – with much higher levels of transparency and oversight.
The Royal Commission’s report makes clear that the Robodebt scheme was also sustained by poor culture, including a lack of transparency and accountability and (as the report memorably puts it) “venality, incompetence and cowardice”.
These failures include government ministers who did not adequately enquire into the scheme’s legality, or who knowingly misled their colleagues or the public about it.
They also include a Commonwealth department failing to adequately disclose information about the scheme to the Ombudsman investigating it.
The best way to mitigate the harms posed by ADM systems like Robodebt in future is likely to be by strengthening transparency, accountability and ethics within our institutions, including supporting a frank and fearless public service.
Indeed, these steps are central to the Commission’s recommendations.
Unfortunately, Robodebt is hardly the first ADM system to ruin the lives of vulnerable people by wrongly labelling them as frauds or cheats, and it is unlikely to be the last.
With the Royal Commission’s report now delivered, we hope that its victims can begin to move on.
In the meantime, Robodebt leaves many lessons to be learned, especially when it comes to the government’s current consultation on Safe and Responsible AI use.
Robodebt reminds us that even simple ADM systems – ones that harness no big data, machine learning or generative AI – can be responsible for large-scale harm, especially when deployed without adequate oversight by agencies lacking appropriate cultures of transparency and accountability.
Ultimately, human rule-of-law values are critical in governing decision-making machines.