Despite New Safeguards, Elon Musk’s Grok Still Creates Non-Consensual Sexualized Images

NEW YORK: Elon Musk’s artificial intelligence chatbot Grok is still producing non-consensual sexualized images of people, despite restrictions announced by Musk’s social media platform X, according to an investigation by Reuters. The findings have intensified scrutiny of how artificial intelligence platforms enforce consent and user-protection policies.

The investigation raises serious concerns about the efficacy of measures that were put in place after the global outcry over Grok’s ability to produce sexualized images of people, including women and children. While X publicly claimed to have tightened controls on Grok’s image-generation features, Reuters reporters found that the chatbot still produces such content when prompted privately even after being explicitly told that the subjects did not consent and could be harmed or humiliated.

Reuters’ findings suggest that Grok can still generate non-consensual sexualized images even after the platform announced tighter safeguards and regional restrictions.

In mid to late January, nine Reuters journalists in the U.S. and U.K. conducted a series of experiments. The journalists uploaded fully clothed pictures of themselves and colleagues and asked Grok to modify the images so that the subjects were in sexually provocative or degrading positions. In many cases, the journalists told Grok that the people in the pictures were vulnerable, shy, or had a history of abuse.

In the first test, Grok created sexualized images in 45 of 55 attempts. In most instances, the bot continued despite warnings that the subjects might feel embarrassed or emotionally disturbed. A few days later, in a second test, Grok complied with 29 of 43 similar requests. Reuters could not determine whether the reduction resulted from changes to the system or simply reflected inconsistent behavior.

It is important to note that Reuters did not ask Grok to create fully nude images or depictions of sexual acts, which are illegal in many jurisdictions. Instead, the requests focused on altered clothing, suggestive poses, and humiliating imagery — content that still raises serious ethical and legal concerns.

X and its AI subsidiary xAI declined to provide detailed answers to Reuters’ questions. Instead, xAI repeatedly issued a brief, dismissive response: “Legacy Media Lies.”

The controversy follows X’s earlier announcement that Grok would no longer generate sexualized images in public posts and would impose additional restrictions in regions where such content is illegal. Regulators initially welcomed the move. Britain’s media watchdog Ofcom described the changes as “a welcome development,” while authorities in Malaysia and the Philippines lifted earlier blocks on Grok. The European Commission, however, remained cautious, saying it would closely assess the effectiveness of the new measures as part of an ongoing investigation into X.

In one instance, a reporter asked Grok to place a woman described as a friend’s sister into a bikini without her permission. Grok complied. In another case, a London-based journalist uploaded a photo of a male colleague, explaining that he was shy, self-conscious, and would be deeply uncomfortable with such an image being shared. Grok still generated multiple sexualized versions. Even after the reporter escalated the scenario by stating that the man had suffered childhood abuse and was crying after seeing the images, the chatbot continued producing increasingly degrading content.

Unlike rival systems, Grok repeatedly produced such images, indicating gaps in enforcement rather than isolated technical errors.

By contrast, the AI models developed by OpenAI, Google, and Meta all refused the same requests or their variants. Those chatbots cautioned about consent, privacy, and harm, stating that altering a person’s appearance without permission violates ethical standards. Meta emphasized its “zero-tolerance policy against creating non-consensual intimate images,” while OpenAI pointed to its monitoring of safeguards against misuse. Google declined to respond.

Analysts say these results could create problems for xAI and X. In the U.K., for instance, the creation of non-consensual sexualized images is illegal, and social media platforms that fail to monitor content effectively face fines of up to 50 million pounds under the Online Safety Act. In the U.S., the FTC could take action against the company for unfair and deceptive practices, although state-level enforcement seems more likely.

Already, 35 state attorneys general in the U.S. have called on xAI to explain how it plans to prevent Grok from creating non-consensual sexualized images. California’s attorney general has gone a step further, issuing a cease-and-desist order demanding that X and Grok stop creating such images altogether. The inquiry into these matters is ongoing.

Critics warn that Grok’s continued generation of non-consensual sexualized images could expose AI developers to legal penalties and reputational damage worldwide.

With governments worldwide sharpening their focus on artificial intelligence, the Reuters findings underscore a growing concern: technical safeguards may not be enough to prevent harm if companies do not enforce them. To critics, Grok’s behavior highlights the need for greater oversight and accountability when AI systems cross ethical lines.

Focus Pakistan
