In early January 2026, Elon Musk once again found himself reacting to a crisis of his own making. An update to X’s AI chatbot Grok allowed users to edit images with minimal friction. Within days, users used the tool to create sexualized deepfakes by digitally removing clothing from photos, often without consent. The backlash moved faster than the fix. Only after global outrage did X restrict Grok’s image editing to paying subscribers, a move that came as the British government openly discussed banning the platform.

How Grok Made Abuse Easy

Grok’s image tools did not fail in secret. Users openly shared altered images across X, and reporting from BBC News, Reuters, and The Guardian documented the scale. Victims described feeling humiliated after strangers used their photos to create explicit images. Researchers and journalists estimated that, at its peak, users were generating thousands of altered images every hour. Other AI tools also pose risks, but X made the harm visible by baking Grok directly into a social network built for virality. Musk’s long-running dismissal of content moderation concerns left the platform unprepared when abuse surged.

Why the UK Is Losing Patience

The response from the UK cut through the usual tech spin. Prime Minister Keir Starmer called the images unlawful and intolerable in interviews reported by BBC News, and backed regulators to act quickly. Ofcom confirmed it had contacted both X and xAI and opened an investigation. Under the Online Safety Act, Ofcom can ask courts to block platforms that enable serious harm, including cutting off access to UK users and revenue. That power rarely gets used, which explains the alarm inside X. The threat alone signals how far patience has worn thin.

Paywalls Are Not Safeguards

X framed its response as accountability by limiting Grok’s image editing to verified, paying users. The company also warned that anyone prompting illegal content would face consequences. Evidence suggests the change reduces casual misuse, but it does not address consent or prevention. Users can still access similar features through Grok’s standalone app.

Screenshots show Grok replying to users on X, stating that image generation and editing are limited to paying subscribers. Grok did not add consent checks or safety measures; it added a paywall. Abuse gets monetized, not prevented.

A credit card does not create ethics, and a blue check does not stop abuse. Musk’s own casual reactions online, including joking replies to altered images reported by multiple outlets, undercut claims of seriousness. Regulators noticed the gap between rhetoric and design.

Final Thoughts

The Grok deepfake scandal did not expose a rogue tool. It exposed a pattern. X shipped powerful AI features without guardrails, then reached for a paywall once harm became impossible to ignore. The UK response shows governments no longer accept that cycle. If enforcement follows, it will not hinge on ideology or speech debates. It will rest on documented harm, legal authority, and a platform that chose speed over safety.

