Elon Musk's AI chatbot Grok reported to eSafety for making deepfake sexual images
Australia's eSafety Commissioner is assessing "several" recent image-based abuse complaints involving Grok. If the complaints meet the legal threshold, the regulator has powers it could use against Musk's company.


Elon Musk’s AI chatbot Grok has been used to create non-consensual sexualised images of people, according to several complaints being assessed by Australia’s internet regulator.
The xAI chatbot has been on a “mass undressing spree” over the past few weeks, generating versions of images that depict people without clothing or in suggestive poses when prompted by users on X (formerly Twitter).
The lack of guardrails preventing Grok from creating intimate and sexualised images of anyone has caught the attention of Australia's eSafety Commissioner Julie Inman Grant, who has novel powers to fine the Elon Musk-led company and to seek a legal order forcing it to shut down the feature.
In the past few weeks, Grok has created tens of thousands of manipulated images of people, predominantly women, in response to users' requests.
Grok’s ability to generate sexualised, even explicit, material isn’t new. In August, xAI released a “spicy” setting for the chatbot app’s video generation which would create nude deepfake content of people.
But it is Grok's integration with X, which allows anyone to prompt the chatbot from within the social network app and post its response publicly, that has been used to create these images in public on a platform used by hundreds of millions of people each month.
This was popularised in the last few weeks by online adult content creators who used Grok to publicly undress images of themselves as a form of self-promotion.

[Image: an example of a creator asking Grok to generate an image of her]
This use case spread to other users who prompted Grok to undress or create sexualised versions of pictures that had been posted by other users.
The Musk-owned chatbot manipulated images to place their subjects in bikinis or see-through clothing, and made other edits, including simulating semen on them. However, Grok did not create fully naked images of people.
The subjects of these depictions included people who had uploaded pictures of themselves, people who’d uploaded images of other people, celebrities and public figures, even children.
In one example archived by another user, Grok created an image that removed the clothes of a 14-year-old television star. In another example, an alleged mother of one of Musk’s children claimed that Grok was used to create a sexualised version of an underage image of her.
Several days after complaints about this non-consensual sexualised image generation began circulating, X’s Safety account posted that “we take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”
Grok now faces scrutiny over the scandal from regulators in countries including India, the UK and Malaysia, and now Australia.
A spokesperson for eSafety told me that it has received “several reports” about Grok being used to make sexualised images without consent, including a complaint of “potential child sexual exploitation material”.
eSafety is still assessing the adult reports, but has determined that the potential child sexual exploitation material in the complaints did not meet the legal threshold for "class 1 material" that would allow the agency to force X to take it down. (The definition of child sexual exploitation material differs between the civil standard, which governs these takedown powers, and the standard used for criminal prosecution.)
“As a result, eSafety did not issue removal notices or take enforcement action in relation to those specific complaints,” they said in an email.
If the complaints about adult image-based abuse are found to be legitimate, the eSafety Commissioner can order X to remove the image in as little as 24 hours. Failing to comply would mean fines and other consequences.
But, an eSafety spokesperson said, the individual reports aren't the only potential source of legal consequences for Grok under Australia's internet regulations.