In a legal milestone that highlights Australia’s escalating battle against AI-fuelled online abuse, 19-year-old William Hamish Yeates has pleaded guilty to multiple charges related to non-consensual deepfake pornography, becoming the first person prosecuted under the country’s newly enacted national law criminalizing manipulated sexual content. The law, which targets the non-consensual creation and distribution of altered intimate imagery, carries a maximum penalty of seven years’ imprisonment.
During Wednesday’s court hearing, Yeates pleaded guilty to four charges covering the creation and alteration of sexual material without the victim’s consent, the distribution of that content, and the use of a digital communications service in a harassing and offensive manner. The teenager originally faced 20 separate Commonwealth charges, but prosecutors from the Commonwealth Director of Public Prosecutions (CDPP) withdrew the remaining counts following his guilty plea. Court documents confirm that Yeates shared the manipulated intimate images of his unnamed victim, without her permission, across multiple accounts on X, the social media platform formerly known as Twitter. Yeates declined to answer questions from reporters as he left the courtroom, and he is scheduled to return for a sentencing hearing next month.
While a small number of Australian states had previously enacted their own laws governing deepfake content, this case marks the first prosecution brought under the unified national law, setting a critical precedent for future cases across the country.
Digital safety experts have long warned that non-consensual deepfake pornography, most often generated using rapidly advancing artificial intelligence tools, represents an evolving and dangerous frontier of gender-based abuse and youth bullying. Data collected by the Office of the eSafety Commissioner, Australia’s internet regulator, backs up these concerns. In 2024 testimony to the Australian parliament, eSafety Commissioner Julie Inman Grant highlighted the explosive growth of harmful deepfake content, noting that the volume of explicit deepfake material online has been growing by 550% year on year since 2019.
Inman Grant’s data also underscores the disproportionate impact of this abuse on women and girls: of all deepfake material currently circulating online, 98% is pornographic, and 99% of that explicit content targets female victims. In response to the growing crisis, the eSafety Commissioner has actively pushed to restrict and ban so-called “nudify” apps within Australia — tools that let users generate non-consensual explicit deepfakes from ordinary photos with minimal technical skill.
Support services are available for individuals in Australia affected by image-based abuse or deepfake harm through the BBC Action Line.
