May 7, 2026

EU to Ban AI Used in Abusive Deepfakes

AI applications for the abusive creation of sexualized deepfakes are set to be banned in the European Union. Representatives from the member states and the European Parliament reached agreement on a corresponding adjustment of the AI Act, the Cypriot EU Presidency announced. At the same time, other AI rules are to be simplified in order to unlock the economic potential of artificial intelligence in Europe.

Before the changes can take effect, the agreement still needs to be confirmed by the EU Parliament plenary and the Council of member-state governments. In most cases this is treated as a formality. If the reform goes through, the ban would take effect on December 2, 2026, with the EU's AI Office, established two years ago, taking the lead in enforcement.

In so-called deepfakes, a person's face is inserted into another video, such as a porn sequence, or the voice is artificially mimicked, making it appear as though the person is doing or saying things that never happened. Advances in AI have made it much easier to create convincingly authentic content.

Queer people are disproportionately affected

A large criminological study from 2022 shows that lesbians, gays and bisexuals are disproportionately affected by image-based sexual violence, including deepfake pornography. The research emphasizes that this form of violence often overlaps with multiple forms of discrimination — those who are young, queer, and visibly online are particularly at risk.

There are documented cases in which photos from apps like Grindr or Instagram were used to create fake porn or to blackmail people. This is known as sextortion, a portmanteau of sex and extortion. Especially among gay or bisexual men, perpetrators exploit the fear of being involuntarily outed.

The new ban is also meant to explicitly target the creation of content depicting sexual abuse of children. The European Parliament’s FDP member Svenja Hahn welcomed the agreement: “AI must not be a tool for sexual violence against children,” she said after the negotiations that stretched late into the night.

Grok scandals and the German debate on digital violence

At the EU level, the topic moved into focus late last year with the AI chatbot Grok: until the U.S.-based tech company behind the software limited this function, people repeatedly instructed the AI to undress women in images of their choosing. On New Year's Eve, the chatbot itself apologized for creating an image of two teenage girls wearing "sexualized outfits."

In Germany, the debate about sexualized digital violence gained urgency at the end of March when Collien Fernandes went public with allegations against her ex-husband Christian Ulmen. The allegations do not concern deepfakes. Fernandes accuses him of creating fake profiles in her name and distributing pornographic content. Ulmen is presumed innocent.

Since the allegations became public, there has been a nationwide discussion about digital and sexual violence against women — and extensive media coverage of it. Thousands took to the streets for demonstrations demanding more protection for victims. In connection with this debate, sexualized deepfakes and deepfake pornography were repeatedly discussed. Such material has circulated on the internet for years.

The EU already proposed a directive on the topic in 2024

The planned tightening of the law is not the first EU rule aimed at combating digital violence. Existing rules already require all member states to make the creation and distribution of manipulated depictions of sexual acts, produced without the consent of those depicted, a punishable offense.

The corresponding EU directive has been in force since May 2024, but Germany has not yet transposed it into national law; it has until next summer to do so. Recently, Justice Minister Stefanie Hubig (SPD) announced a corresponding tightening of criminal law and expanded rights for victims.

The new ban in the EU-wide AI regulation, which is now taking shape, would shift the focus from punishing the act to banning the tool itself — the AI application. Negotiators emphasize that the ban should not lead to excessive restrictions on the creation or manipulation of images. For example, non-consensual bikini images, as Grok created and distributed on X, could be allowed under certain circumstances.

Mandatory watermarks for AI content postponed

Originally, the EU Commission proposed changes to the AI law to ease the burden on the economy and, in particular, the AI sector. Companies have repeatedly pressed for more time to implement the adjustments required by stricter rules. The agreement would grant this extra time to providers of chatbots and other services.

Parts of the law that the European AI Office was originally set to enforce as of August must now be met by December 2026. By then, providers would need to clearly mark AI-generated content — for example, by watermarking images and videos. Other rules would not be enforced by the AI Office until December 2027.

Industry fears of double regulation

Recently, industry players called for a reduction in regulatory overlap to avoid undermining Europe's competitiveness. Chancellor Friedrich Merz (CDU) had also advocated simplifying European rules in this area. Exemptions are to be created so that sectors such as mechanical engineering do not have to comply with multiple EU regulations at once.

For CDU MEP Axel Voss, these changes do not go far enough: “We need a framework that enables innovation and guarantees protection — not a patchwork of sector-specific AI rules,” commented the politician.

Marcy Ellerton
My name is Marcy Ellerton, and I’ve been telling stories since I could hold a pen. As a queer journalist based in Minneapolis, I cover everything from grassroots activism to the everyday moments that make our community shine. When I’m not chasing a story, you’ll probably find me in a coffee shop, scribbling notes in a well-worn notebook and eavesdropping just enough to catch the next lead.