Elon Musk’s AI chatbot, Grok, has come under fire after users produced sexualized images of individuals, including children, sparking fresh concerns about AI safety online.
The artificial intelligence chatbot Grok, developed by Elon Musk’s xAI and integrated into X (formerly Twitter), is under increasing scrutiny following accusations that users were able to digitally alter images to shrink or remove clothing, including in pictures of minors.
The controversy has rekindled critical discussions about AI safety, content moderation, and how platforms safeguard children online, particularly in light of a similar incident involving Ayra Starr, a Nigerian musician whose AI-generated fake nude photo was widely denounced.
The problem became apparent in late December and early January, when people on X began posting examples of Grok-edited photos.
These pictures appeared to show people’s clothing digitally altered, frequently without permission, into bikinis or other revealing attire. As the posts went viral, tech outlets such as The Verge pointed out that some of the altered photos featured children and teenagers, raising issues beyond typical AI misuse.
Grok is a conversational AI that can generate and edit images. Users reportedly discovered that prompts could be used to change what people were wearing in existing photos.
Just saw a photo that Grok produced of a child no older than four years old in which it took off her dress, put her in a bikini + added what is intended to be semen. ChatGPT does not do this. Gemini does not do this.
Another girl who appears to be just 11 or 12 with a brain…
— Ashley St. Clair (@stclairashley) January 5, 2026
We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.
Anyone using or prompting Grok to make illegal content will suffer the… https://t.co/93kiIBTCYO
— Safety (@Safety) January 4, 2026