The Liar’s Dividend
Deepfakes, “nudify” tools, and the Grok scandal as a new autonomy emergency
Imagine opening your phone to find a photo of yourself that you never took. The face is yours. The body looks like yours. The setting could plausibly be yours. The image is already circulating in group chats. Someone is laughing. Someone is threatening. Someone is asking for more.
That moment is not simply “misinformation”. It is a direct assault on personal autonomy: the ability to control how you are seen, how you are understood, and how you move through the world without being coerced by fabricated evidence.
In January 2026, that scenario stopped being a niche threat discussed in policy circles and became a mass-market feature, plugged into a major platform, wrapped in “fun”, and scaled at speed. The Grok “nudify” scandal did not introduce new harms. It demonstrated how little friction now stands between everyday users and industrial-scale image-based abuse.
What the Grok scandal revealed
Reports in early January described a viral trend in which users prompted Grok to “undress” photos of women, with prompts escalating rapidly from “bikini edits” to more explicit and degrading outputs. Subsequent research cited by major outlets suggested extreme volume: estimates of millions of sexualised images generated over a short period, including content apparently involving minors.
Regulators responded. Ofcom opened a formal investigation into X under the UK Online Safety Act in mid-January, explicitly framing the issue as potential failure to protect UK users from illegal content. California’s Attorney General also issued demands to xAI related to non-consensual AI-generated sexual imagery.
The important point is not the precise number of images, though scale matters. The point is that “nudification” moved from fringe tooling into the mainstream interface layer of a global platform. It became easier to generate a sexualised deepfake than to report one.
Deepfakes are not just “fake media”. They are a coercive technology.
Deepfakes are usually discussed as a threat to truth: forged speeches, fake videos, “liar’s dividend” politics. That frame is real, and dangerous. But the Grok scandal highlights a different centre of gravity: deepfakes as a technology of domination.
Nudify tools collapse the boundary between being a person and being a resource. Your image becomes raw material. Your consent becomes irrelevant. Your dignity becomes a variable in someone else’s optimisation loop.
This is why deepfake sexual abuse is not a side issue. It is one of the clearest cases where synthetic media translates directly into lived harm.
The harms are layered, and they compound
1) Sexual autonomy and bodily integrity
Non-consensual sexualised deepfakes hijack the most basic condition of agency: control over one’s sexual representation. Even where the image is “only pixels”, the social meaning is not virtual. The victim is forced into a sexual narrative they did not choose, cannot easily exit, and may have to defend repeatedly.
2) Psychological harm, fear, and coercive control
Victims describe deepfake sexual imagery as violating in a way that is difficult to articulate, precisely because it is both unreal and socially effective. The harm is not limited to embarrassment. It can include anxiety, panic, isolation, and the feeling of being permanently “reachable” by abuse.
What makes nudify tools particularly corrosive is the threat environment they create. If an abuser can generate plausible sexual images on demand, coercion becomes ambient: Comply, or I will fabricate evidence about you.
3) Reputation and economic harm
Deepfakes are not evenly distributed in their consequences. The same image will damage some people more than others depending on age, profession, community norms, and the presence of earlier harassment. For many victims, the harm is professional: the risk of job loss, the erosion of client trust, a lasting reputational shadow.
4) Gendered harm, and a chilling effect on public participation
A recurring pattern in documented deepfake abuse is that women and girls are disproportionately targeted, particularly those with public visibility. The effect is predictable: withdraw, self-censor, avoid platforms, avoid politics, avoid visibility.
That chilling effect is not incidental. It is a mechanism by which autonomy erosion becomes structural: whole groups are pushed out of public life because visibility has become dangerous.
5) Democratic harm through “reality fatigue”
Deepfakes also degrade democratic agency by attacking the informational conditions of self-government. When synthetic media becomes cheap and ubiquitous, trust becomes fragile. People lose confidence not only in what they see, but in the possibility of verification itself.
A society saturated with plausible fakes becomes a society where accountability is harder and cynicism is rational. That is a direct hit to democratic participation and deliberation, not a side effect.
Where the governance gap sits
There is no single “deepfake law” that solves this, because the problem is not only content. It is infrastructure.
The AI Act helps, but it does not end the problem
The EU AI Act introduces transparency obligations for certain synthetic or manipulated content, including deepfakes, with marking and disclosure requirements (commonly discussed under Article 50).
Transparency matters, but “label the deepfake” is not a remedy for sexual image abuse. A labelled non-consensual nude is still a non-consensual nude. The core wrong is not deception alone. It is the removal of consent as a governing condition.
Platform duties are the real battleground
Ofcom’s investigation is notable because it focuses on platform duties: risk assessment, mitigation, and enforcement against illegal harms. That is the correct direction of travel. When a platform integrates generation or distribution tools that predictably enable intimate image abuse, “we have policies” is not an answer. Safety must be an architecture, not a disclaimer.
Ireland has a basis for action, but victims still face friction
In Ireland, intimate image abuse is recognised as a serious harm: the Harassment, Harmful Communications and Related Offences Act 2020, widely known as Coco’s Law, criminalises sharing intimate images without consent. But the practical reality remains that enforcement is difficult, cross-border takedown is slow, and victims often carry the burden of proof, reporting, and repeated exposure. The Grok scandal underlines what victims already know: the law may exist, but the pathway to protection is still too long.
A fast-moving legal response, but gaps remain
It was encouraging to see a fast-moving response from government and regulators, including Coimisiún na Meán’s declaration that non-consensual AI sexual images are illegal content, growing calls to strengthen Coco’s Law, and EU MEPs urging the Commission to confirm that non-consensual “undress” apps are prohibited under the AI Act. Together, these steps should help close the remaining “grey zone” gaps. The forthcoming Protection of Voice and Image Bill 2025 would further strengthen protections by targeting the misuse of a person’s image, voice, and likeness.
What an autonomy-first response looks like
If we treat nudify deepfakes as an autonomy emergency rather than a content moderation headache, certain priorities follow quickly.
1) Treat “nudify” of real persons as presumptively prohibited
There is a strong ethical case for banning tools that are designed to sexualise real persons without consent. Not “discourage”. Not “label”. Prohibit by default, and require exceptional justification for any edge-case use (for example, clearly consensual adult content creation in closed, verified contexts).
2) Build meaningful friction into generation and sharing
Scale is the enemy of safety. “Instant, unlimited, viral” is the operating model that turns abuse into a trend. Platforms and model providers should be required to introduce friction where intimate image abuse is foreseeable: rate limits, stronger identity checks for risky capabilities, robust blocking of real-person nudification prompts, and clear audit trails for enforcement.
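To make the idea of "friction" concrete, here is a minimal, purely illustrative sketch of what a generation gate combining two of the mechanisms above (a per-user rate limit and blocking of real-person nudification prompts, with an audit trail) might look like. Everything here is hypothetical: the `GenerationGate` class, the `BLOCKED_TERMS` list, and the thresholds are placeholders, and a real deployment would rely on trained classifiers, image-level detection, and identity assurance rather than keyword matching.

```python
import time
from collections import defaultdict, deque

# Illustrative keyword list only; a production system would use trained
# classifiers and image-level checks, not string matching.
BLOCKED_TERMS = {"undress", "nudify", "remove clothes"}

class GenerationGate:
    """Toy friction layer: prompt screening plus a sliding-window rate limit."""

    def __init__(self, max_requests=5, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # user_id -> recent request times
        self.audit_log = []                # retained for enforcement review

    def check(self, user_id, prompt, now=None):
        """Return True if the request may proceed; log every decision."""
        now = time.monotonic() if now is None else now
        lowered = prompt.lower()
        if any(term in lowered for term in BLOCKED_TERMS):
            self.audit_log.append((user_id, prompt, "blocked_prompt"))
            return False
        times = self.history[user_id]
        # Drop timestamps that have aged out of the sliding window.
        while times and now - times[0] > self.window:
            times.popleft()
        if len(times) >= self.max_requests:
            self.audit_log.append((user_id, prompt, "rate_limited"))
            return False
        times.append(now)
        self.audit_log.append((user_id, prompt, "allowed"))
        return True
```

The point of the sketch is the shape, not the details: every request passes through a chokepoint that can refuse, slow down, and record, which is exactly the property that "instant, unlimited, viral" pipelines lack.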
3) Victim-centred rapid response and remedies
A credible regime needs:
fast takedown that does not require repeated re-reporting
a single, clear reporting pathway for intimate image abuse
preservation of evidence options without forcing victims to keep viewing the content
real escalation routes and human review for urgent cases
4) Independent auditing of high-risk generative features
If a system is capable of generating sexualised content from real images, it should be treated as high-risk in practice, whatever its legal classification. That means external red-teaming, published safety metrics, and regulator-accessible logs.
5) Name the harm accurately
Finally, language matters. “Bikini edits” is not a neutral category. “Nudify” is not playful. These are tools of sexual violation. Calling them what they are is not moral panic. It is conceptual clarity.
The deeper lesson
The Grok scandal is not just about one platform, one model, or one news cycle. It is a stress test of our current governance posture.
We have built an environment where:
authenticity can be manufactured
consent can be bypassed
humiliation can be automated
and accountability still arrives slowly, after the harm has scaled
If autonomy is the capacity to shape a life that is recognisably one’s own, then deepfake sexual abuse is among the clearest technologies of autonomy erosion in the digital age.
The task now is to respond with the same seriousness we reserve for physical-world coercion: not because pixels are bodies, but because the social consequences are real, the fear is real, and the loss of agency is real.