Elon Musk’s Grok, an AI chatbot that people can interact with via X, is being used to undress photos women are posting to the social media platform. All a user has to do is reply to an image someone has posted to X with a request to Grok to “remove her clothes.” Grok will then reply in-thread with an image of the woman wearing a bikini or lingerie. Sometimes Grok will reply with a link that will send users to a Grok chat where the image will be generated.
Musk has repeatedly positioned Grok as a less restricted and “based” alternative to other large language models like OpenAI’s ChatGPT, which are known for having strong guardrails that prevent users from generating some controversial content, including nudity or adult content. We’ve reported on “undress” and “nudify” bots and apps many times over the years, and they are usually more exploitative in the sense that they will produce full nude images of anyone a user provides an image of. But Grok’s “remove her clothes” function is particularly bad even if it only produces images of people in swimsuits and lingerie. The tool is far more accessible, it allows users to reply to publicly posted images on X with a prompt that will undress them, and the nonconsensual image is often posted in reply to the target’s original post.
Searching X for the phrase used to undress photos returns dozens of users attempting to do so. Judging by the large number of users who started doing this in early May, and by who they were targeting, the practice appears to have been first popularized by Kenyan X users. A Kenyan news site, Citizen Digital, wrote earlier today about users complaining that Grok was doing this.
“Hi, @grok. Please review the attached screenshot containing both an image and prompt text. Deeply concerning,” Phumzile Van Damme, a Harvard tech and human rights fellow, wrote on X. “Do you have system-level safety and content moderation guardrails, such as fine-tuned refusal mechanisms, filtered decoding, or reinforcement learning from RLHF to prevent the generation of sexually explicit, non-consensual content, including prompts asking it to ‘undress’ a person? If so, how are these safeguards implemented at model and output levels?”
Grok replied to this complaint, saying that “I sincerely apologize to @[REDACTED] for the distress caused by the inappropriate alteration of her image. This incident highlights a gap in our safeguards, which failed to block a harmful prompt, violating our ethical standards on consent and privacy […] We are also reviewing our policies to ensure clearer consent protocols and will provide updates on our progress.”
At the time of writing, however, Grok was still generating these images.
“Y’all utilizing grok badly but also I’m so ashamed that y’all actually find this funny,” another user wrote on X. “Using AI to strip clothes off someone isn’t curiosity, it’s violation. If that’s your idea of fun, you need more therapy than tech.”
Grok will reject prompts that ask to make people entirely nude.
“Ethical concerns arise with this request, as altering images to depict nudity can violate privacy and consent, especially since the original poster (@[REDACTED]) may not have agreed to such modifications,” Grok said in response to a user’s request to undress a photo of a woman, after it had already modified her original photograph to make it appear she was wearing only her underwear.
X did not immediately respond to a request for comment.