‘Picture to Burn’: The law probably won’t protect Taylor (or other women) from deepfakes

Legal redress is hard if you fall victim to an AI-generated pornographic and abusive deepfake

Professor Jeannie Marie Paterson, University of Melbourne

Published 8 February 2024

Taylor Swift’s ordeal at the hands of deepfake pornographers has spotlighted the ugly, misogynistic practice of humiliating women (and they are mostly women) by creating and distributing fake images of them performing sexual acts.

If you haven’t heard of them, deepfakes are images, videos or audio that falsely present real people saying or doing things they did not do.

Deepfakes are created using an artificial intelligence (AI) technique called deep learning, which manipulates existing images, video or audio of a real person (or even uses generative AI to create entirely ‘new’ footage) to depict things that never actually happened.

And the results can be quite convincing.

WATCH: Picture to Burn by Taylor Swift. Video: YouTube

One of the deepfake images targeting Taylor Swift totalled almost 47 million views before Swift’s fans mobilised and reported the fake en masse. Even the White House got involved, calling it “alarming”.

AI has been used to create non-consensual pornographic and sexualised images of celebrities, politicians and ordinary individuals. It’s wrong on every level and can cause severe psychological, relationship and career harm.

And the surge in generative AI products just makes this easier.

Deepfakes can be used to blackmail and intimidate the person falsely represented. Australia’s E-Safety Commissioner reports that school-aged children are being bullied using deepfake images.

Deepfake images, video and voices may also be used for scams and fraud or to undermine the credibility of public figures and manipulate political processes.

But are they illegal? And does the legal system offer any redress to targets in what is increasingly the Wild West of the internet?

Taylor Swift is reportedly considering suing in response to the non-consensual and sexually explicit deepfake images of her recently released online, and US lawmakers are reportedly considering new criminal offences to tackle the issue.

In Australia, state and territory criminal law (other than in Tasmania) contains specific offences for intimate image abuse, which may capture deepfake images.

In Victoria, for example, it’s an offence to intentionally produce, distribute or threaten to distribute an intimate image depicting another person where the image is “contrary to community standards of acceptable conduct”.

Of course, criminal law often fails to provide justice to the victims of deepfake images because the people who created and shared them – the perpetrators – cannot be found or traced.

Another response is to ensure the images are removed as quickly as possible from social media or websites. This happened in the Taylor Swift case, when X temporarily restricted searches for her name following pressure from thousands of her fans.

Taylor Swift is reportedly considering suing in response to the non-consensual deepfakes. Picture: Getty Images

But most victims aren’t Taylor Swift, and even those who are famous are usually less fortunate.

In Australia, victims of offensive deepfake images can request that platforms and websites remove the images, although they will not usually have a massive public movement behind them.

The E-Safety Commissioner has the power to demand they be taken down. But by then it may be too late.

Notably, even in the Taylor Swift case, the offending images were reportedly shared millions of times before this happened.

The Online Safety Act 2021 (Cth) imposes civil penalties on those who fail to comply with takedown orders or who post intimate images, including deepfake images, without consent. The penalty is up to 500 penalty units, or around $AU156,500.

Civil penalties are a kind of fine – money paid by the wrongdoer to the Commonwealth. The payment is aimed at deterring the wrongdoing simply by making the cost of the conduct prohibitively high.

But the penalty does not get paid to the victim, and they may still wish to seek compensation or vindication for the harm done.

It is unclear whether Taylor Swift will sue, or who she would sue.

In Australia, deepfake victims have only limited causes of action through which to seek damages in a civil claim. And again, in most cases the victim will not be able to find the wrongdoer who created the non-consensual pornographic image.

This means the most viable defendant will be the platform that hosted the image, or the tech company that produced the technology to create the deepfake.

Deepfake technology has legitimate uses, but when it is used to harm, victims have only limited causes of action for seeking damages. Picture: Getty Images

In the US, digital platforms are shielded from this kind of liability by Section 230 of the Communications Decency Act, although the limits of that immunity are still being explored.

In Australian law, a platform or website can be directly liable for hosting defamatory material. Non-consensual deepfake pornographic images may be classed as defamatory if they would harm the reputation of the person being shown or expose them to ridicule or contempt.

There is, unfortunately, still a question over whether a deepfake that is acknowledged as a fake would have this effect in law, even though it may still humiliate the victim.

Moreover, Australia is now introducing reforms to defamation law that limit the liability of digital intermediaries in these scenarios.

This immunity is subject to conditions, including that the platform has an “accessible complaints mechanism” and takes “reasonable prevention steps”.

In cases where deepfake images of celebrities are used to promote scams, particularly investment scams, the conduct occurs in ‘trade or commerce’.

Victims of this fraud may be able to claim compensation for the harms caused to them by misleading conduct under the Australian Consumer Law or ASIC Act.

Of course, as we have already seen, the perpetrator is likely to be hard to find. Which again leaves the platform.

There is talk of introducing mandatory ‘safety’ obligations in Australia and new disclosure obligations in the EU. Picture: Getty Images

Test case litigation brought by the Australian Competition and Consumer Commission (ACCC) is currently exploring whether digital platforms, in this case Meta, can be made liable for misleading deepfake crypto scams.

The ACCC is arguing that Meta should be directly liable for misleading conduct because it actively targeted the ads at possible victims. The ACCC is also arguing that Meta should be liable as an accessory to the scammers because it failed to remove the ads promptly, even after it was notified that they were fakes.

And what about the technology producers who put the generative AI tools used to create deepfakes on the market?

The legal question here is whether they have a legal duty to make those tools safe.

These kinds of ‘guard rails’ might include technical interventions that prevent the tool from responding to prompts for deepfake pornography, more robust content moderation, or watermarking to distinguish fake from authentic images.

Some may be doing this voluntarily. There is talk of introducing mandatory ‘safety’ obligations in Australia and new disclosure obligations in the EU. However, currently, the producers of generative AI are unlikely to owe a legal duty of care that would oblige them to take these actions.

None of these methods is foolproof, and they may introduce concerns of their own.

We should remember that the core harm of sexually explicit deepfake images arises from a lack of consent and social beliefs that tolerate the weaponisation of intimate images.

The Taylor Swift case may be a wake-up call for the law to catch up. Picture: Getty Images

Sure, people are entitled to create and share sexualised images for their own interest or pleasure. But this should never be confused with the use of non-consensual explicit deepfake images to threaten, exploit and intimidate.

Right now, Australia’s laws offer victims little in the way of genuine and accessible redress through the legal system. There needs to be a multifaceted response – embracing technical, legal and regulatory domains, as well as community education, including about the offence of intimate image abuse.

It is not just celebrities who are the victims of deepfake pornography, but the Taylor Swift case may be a wake-up call for the law to catch up.

Swiftposium is an academic conference for scholars discussing the impact of Taylor Swift. It runs at the University of Melbourne from 11-13 February 2024 with public events on Sunday 11 February and recordings of the keynote presentations available online after the conference.

Banner: Taylor Swift performing her ‘Acoustic Era’ in Kansas City / Getty Images
