Elon Musk's AI chatbot Grok reported to eSafety for making deepfake sexual images
Australia’s eSafety Commissioner is assessing “several” recent image-based abuse complaints involving Grok. If they meet the legal threshold, the regulator has powers it could use against Musk’s company.


Elon Musk’s AI chatbot Grok has been used to create non-consensual sexualised images of people, according to several complaints being assessed by Australia’s internet regulator.
The xAI chatbot has been on a “mass undressing spree” over the past few weeks, generating versions of images that depict people without clothing or in suggestive poses when prompted by users on X (formerly Twitter).
The lack of guardrails preventing Grok from creating intimate and sexualised images of anyone has caught the attention of Australia’s eSafety Commissioner Julie Inman Grant, who has novel powers to fine the Elon Musk-led company and, ultimately, seek to legally force it to shut down the feature.
In the past few weeks, Grok has created tens of thousands of manipulated images of people, predominantly women, in response to users' requests.
Grok’s ability to generate sexualised, even explicit, material isn’t new. In August, xAI released a “spicy” setting for the chatbot app’s video generation, which would create nude deepfake content of people.
But it is Grok’s integration with X — which allows anyone to prompt the chatbot from within the social network app and post its response publicly — that is being used to create the images in public on a platform used by hundreds of millions of people each month.
This was popularised in the last few weeks by online adult content creators who used Grok to publicly undress images of themselves as a form of self-promotion.

[Image: an example of a creator asking Grok to generate an image of her]
This use case spread to other users who prompted Grok to undress or create sexualised versions of pictures that had been posted by other users.
The Musk-owned chatbot manipulated images to place their subjects in bikinis or see-through clothing, and made other edits including simulating semen on them. However, Grok did not create fully naked images of people.
The subjects of these depictions included people who had uploaded pictures of themselves, people who’d uploaded images of other people, celebrities and public figures, and even children.
In one example archived by another user, Grok created an image that removed the clothes of a 14-year-old television star. In another, a woman who says she is the mother of one of Musk’s children claimed that Grok was used to create a sexualised version of an image of her taken when she was underage.
Several days after complaints about this non-consensual sexualised image generation began circulating, X’s Safety account posted that “we take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”
Grok faces scrutiny over the scandal from regulators in countries including India, the UK and Malaysia — and now Australia.
A spokesperson for eSafety told me that it has received “several reports” about Grok being used to make sexualised images without consent, including a complaint of “potential child sexual exploitation material”.
eSafety is still assessing the adult reports, but has determined that the potential child sexual exploitation material in the complaints did not meet the legal threshold for “class 1 material” that would allow the agency to force X to take it down. (The definition of child sexual exploitation material differs between the civil standard, which governs these takedown powers, and the standard for criminal prosecution.)
“As a result, eSafety did not issue removal notices or take enforcement action in relation to those specific complaints,” they said in an email.
If the complaints about adult image-based abuse are found to be legitimate, the eSafety Commissioner can order X to remove the images within as little as 24 hours. Failure to comply would mean fines and other consequences.
But, an eSafety spokesperson said, the individual reports aren’t the only route to legal consequences for Grok under Australia’s internet regulations.
The eSafety spokesperson also cited Australia’s online industry codes and standards, a set of legally binding regulations requiring online providers like X and xAI to have systems to prevent the creation and distribution of child sexual exploitation material and other illegal content.
These currently explicitly require AI chatbots to take steps to stop the creation of deepfake child sexual exploitation material. Material that fits this definition includes a “depiction that appears intended to debase or abuse the person depicted for the enjoyment of readers or viewers” and “is not limited to sexual intercourse”.
The eSafety Commissioner can apply for fines of $990,000 under this industry code if the company doesn’t have adequate systems and technologies to prevent the production of child sexual exploitation material.
In the case of repeated failures within a 12-month period, the eSafety Commissioner can apply for legal orders forcing the company to comply or take remedial action and, in severe cases, even seek a court order to shut down its services.
The eSafety Commissioner has not taken any action against X for failing to meet these industry standards.
The two organisations have been frequent legal opponents, including a previous stoush over X’s refusal to remove video of the 2024 Wakeley church stabbing. The regulator initially deemed the video class 1 material but ultimately abandoned proceedings to force the social media platform to do so.
eSafety has been active in cracking down on non-consensual sexual imagery generators (sometimes called “nudify apps”), including prodding a major global app provider to block Australian users.
“eSafety remains concerned about the increasing use of generative AI to sexualise or exploit people, particularly where children are involved,” a spokesperson said.
Despite this, Musk’s Grok has faced no consequences in Australia, even though it is far more prominent than any other nudify application and its sexual image generation features have been well publicised.
The Grok app can be downloaded by users as young as 13 in both Apple’s and Google’s app stores. Musk even has plans to launch a version of his AI chatbot for kids called ‘Baby Grok’.