Discrimination by recruitment algorithms is a real problem

A drawing of a robot arm pressing a button, causing a businesswoman to fall through the floor. Banner: Getty Images

As employers' use of AI to screen job applicants grows, there are serious risks of discrimination against women, older applicants and minority groups. Should the technology be banned until it is regulated?

By Dr Natalie Sheard, University of Melbourne


Published 15 May 2025

Predictive artificial intelligence (AI) hiring systems (AHSs) are used by employers every day to screen and shortlist job candidates.

But while AI hiring systems promise time and cost savings for employers, they may also enable, reinforce and deepen discrimination against people who are already marginalised in the labour market.

AI hiring systems promise time and cost savings, but also pose a huge risk. Picture: Getty Images

Groups at risk include women, older workers, people with disability and those who speak English as a second language.

In 2024, 62 per cent of Australian organisations used AI in recruitment moderately or extensively, according to the most recent Responsible AI Index.

Many of these systems use AI to screen CVs, assess candidates' answers to interview questions, and rank and shortlist applicants.

Some AHSs have been found to discriminate against applicants who wear a headscarf, while others are unable to make reasonable adjustments to enable access by people with disability.

In the most well-known case, an AI system developed by Amazon learned to downgrade applications from job seekers whose CVs included the word ‘women’. The system had been trained on CVs drawn largely from men in the male-dominated tech industry.

Despite these known problems, substantial gaps exist in our understanding of the real – as opposed to theoretical – risks of discrimination when these systems are used.

My new research investigates, for the first time, the use of AHSs by Australian employers. I found that the way these systems are used in practice creates serious risks of discrimination.

My study drew on interviews with Australian recruiters, AI experts and developers, and career consultants, as well as publicly available material from two prominent AHS vendors in the Australian market.

AHSs can discriminate against marginalised groups who lack digital literacy. Picture: Getty Images

I found that the data used to train AHSs risks embedding present-day and historical discrimination, and systems developed overseas may not reflect the diversity of people in Australia.

Many of the features built into these algorithmic models act as proxies for attributes like gender, disability or age, which may prevent people in these groups from being shortlisted for jobs.

For example, when gaps in employment history are used as variables in algorithmic models, they can act as proxies for gender, as women are more likely to have taken time out of employment to care for children.
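To make this proxy mechanism concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is a synthetic assumption made for demonstration (the population, the probabilities, the simple logistic model); nothing is drawn from my study or from any real AHS. It shows how a screening model that is never given gender can still score women lower when a career-gap feature correlates with gender in the historical hiring data it was trained on.

```python
# Illustrative sketch only: synthetic data showing how an 'employment gap'
# feature can act as a proxy for gender. All names, rates and the model
# are hypothetical assumptions, not taken from any real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: gender is never shown to the model, but in this
# synthetic setup women are more likely to have a gap in employment history.
is_woman = rng.random(n) < 0.5
career_gap = np.where(is_woman, rng.random(n) < 0.40, rng.random(n) < 0.10)

# Equal underlying competence across groups.
skill = rng.normal(size=n)

# Historical hiring decisions that penalised career gaps: the past
# discrimination that the training data embeds.
hired = (skill - 1.0 * career_gap + rng.normal(scale=0.5, size=n)) > 0

# Train a screening model WITHOUT gender as an input feature.
X = np.column_stack([skill, career_gap.astype(float)])
model = LogisticRegression().fit(X, hired)

# The model reproduces the penalty: equally skilled women receive lower
# scores on average, simply because 'career gap' leaks gender.
scores = model.predict_proba(X)[:, 1]
print("mean score, women:", round(scores[is_woman].mean(), 3))
print("mean score, men:  ", round(scores[~is_woman].mean(), 3))
```

In this toy setup, deleting the gender field achieves nothing, because the gap feature carries the signal. That is why auditing these systems means testing outcomes by group, not simply removing protected attributes from the inputs.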

Discrimination can also result from the way the system is set up by employers.

For example, setting a time limit for answering questions may disadvantage job seekers from non-English speaking backgrounds.

Also, discrimination can occur if employers do not ensure that their system is accessible to people with disability on an equal footing with other job seekers.

Significantly, my study found that employers using AHSs can create new forms of structural discrimination against marginalised groups who lack the resources, like computer access and digital literacy, to complete an online application.

Finally, AHSs offer fresh opportunities for employers to engage in intentional discrimination.

In a recent case in the US, the iTutor Group’s hiring system was configured to automatically reject female job seekers over 55 years of age and men over 60 years of age.

There's a lot at stake when ‘algorithm-facilitated discrimination’, as I term it, occurs.

A job application is “a person’s attempt to change their life”. Picture: Getty Images

As one recruiter who was interviewed acknowledged, a “job application is literally a person’s attempt to change their life with a new job”.

A discriminatory AHS can cause harm at unprecedented speed and scale and has the capacity to systematically lock disadvantaged groups out of the workforce.

We need to take urgent action.

Governments in Australia should review and reform their discrimination laws to address any gaps in the protection of job seekers from this kind of discrimination.

We also need greater transparency around the workings of AI systems, including the large language models they incorporate.

The training data must be representative and documented.

Employers also need a better understanding of the AHSs rolled out in their organisations and of these systems' potential to cause harm at scale.

They should be obliged to provide comprehensive training to those responsible for customising, operating and overseeing these systems.

Finally, and most fundamentally, the discovery in this research of significant risks to equality rights when employers use AHSs raises the question: should these systems be used at all?

Should it be permissible to use AHSs before necessary legal protections are in place?

Should they be in use before we have a deeper understanding, not only of the systems themselves and our interaction with them, but also of their impacts on historical, structural and intersectional disadvantage in the global labour market?
