eSafety demands Elon Musk's X explain anti-CSAM safeguards amid AI deepfake scandal
X faces fines and even a forced shutdown if it has been breaching Australian rules that require online providers to prevent and detect child sexual exploitation material.


Australia’s online safety regulator is stepping up pressure on Elon Musk’s X over its global AI deepfakes scandal.
The office of the eSafety Commissioner is demanding the tech company explain what it’s doing to deal with child sexual exploitation material, after reports that its AI chatbot Grok is being used to create sexualised, sometimes criminal, images of kids.
eSafety has now joined regulators in several other countries that are seeking information and threatening further penalties over the misuse of Grok's AI image and video generation features.
xAI, which owns the social media platform X and the AI chatbot Grok, is currently embroiled in two related controversies.
On X, tens of thousands of non-consensual, sexualised deepfakes of people have been made over the past few weeks.
People prompted Grok, which is integrated into X, to generate versions of images that depict women in bikinis, underwear, or other revealing and suggestive poses (although it would not generate naked depictions of people). This has been turned against other people on X, including random women, public figures, and children. Yesterday, it was used to make a bikini image of the body of a woman killed by a United States ICE officer. Today, Grok was stopped from generating images on the social media platform when prompted by free users.
But over on Grok’s standalone platform — which can be accessed via the web or an app — there’s no such restriction. And this version of Grok will go further and let people generate explicit, nude versions of images of others.
This week, charity group the Internet Watch Foundation reported that people on the dark web claimed they had used Grok to make topless and sexualised images of teen girls as young as 11 years old.
An eSafety spokesperson said the regulator is working with "international child protection organisations and online safety regulators, who have identified similar emerging patterns involving generative AI tools, including material appearing on mainstream platforms and dark web forums."
eSafety's request for information is part of its investigation into whether there has been a breach of the industry codes and standards — a set of Australian internet regulations established under the Online Safety Act for online providers like X and Grok. One of these obligations is a requirement to detect and remove child sexual exploitation material.
If found to have broken these rules, Musk’s X could face consequences that go beyond just fines.