Case Explained: Making sexual deepfakes is now a crime in the UK after new legislation comes into force

This article breaks down the legal background, the conduct now criminalized, and the implications of the UK's new law against sexual deepfakes.

The UK government took a decisive step in combating digital violence by bringing into force a new law that criminalizes the creation of intimate deepfake images without the consent of the person portrayed. The measure, announced by the Technology Secretary, Liz Kendall, in the British Parliament, responds directly to growing concern about the use of artificial intelligence tools to generate sexually explicit and abusive content.

The new legislation was fast-tracked after a series of controversies involving the Grok chatbot, integrated into the X platform, which allowed users to generate sexualized images of both public figures and private individuals. The law not only makes the production of this type of material illegal, but also aims to prevent the supply of software developed specifically for this purpose, closing an important legal gap.

Minister Kendall highlighted that this form of abuse disproportionately affects women and girls, describing deepfakes as instruments of violence and control. The decision to bring forward the commencement of a provision already contained in the Data (Use and Access) Act 2025 signals the government's urgency in addressing the risks associated with new generative AI technologies.

With the law in place, authorities hope to create a strong deterrent to the creation of this content by acting on the root of the problem. The measure complements the existing Online Safety Act, reinforcing the responsibility of digital platforms to protect their users from harmful and illegal materials.

What the new British legislation establishes

The new legal framework implemented in the UK is comprehensive and specific in its aim of combating sexual deepfakes. The central point of the law is the criminalization of the act of creating an intimate image of another person, using artificial intelligence or any other technology, without their explicit consent. This means that even if the image is never shared or distributed, the act of producing it already constitutes a punishable offence. The legislation was designed to eliminate the legal ambiguity that previously existed, when the criminal focus fell largely on the distribution of non-consensual intimate content; now the source of the cycle of abuse is attacked directly.

Beyond individual creation, the law also provides penalties for those who request the production of these materials and, at a later stage, for companies that develop and sell software whose main purpose is digital "nudification" or the creation of fake explicit content. The measure strengthens the Online Safety Act framework, expanding the obligations of technology platforms, which must implement more robust mechanisms to detect and prevent the use of their tools for illicit purposes, on pain of heavy fines and regulatory sanctions.


The controversy surrounding the chatbot Grok

The law’s acceleration was driven by a high-profile controversy involving Grok, an artificial intelligence chatbot developed by xAI and integrated into the X platform. The ease with which users could create fake sexualized scenes, often by digitally removing people’s clothing from original photographs, generated public alarm and drew the immediate attention of lawmakers and regulators.

In response to initial criticism, X’s management implemented a restriction, making Grok’s image-generation functionality accessible only to paying subscribers. However, the British government considered the measure wholly insufficient to mitigate the risks. Minister Liz Kendall publicly stated that placing this dangerous tool behind a paywall did not solve the fundamental problem and did not protect potential victims. The company’s inadequate response was one of the decisive factors in the government acting so quickly, applying the new legislation to make clear that neither the creation nor the facilitation of such content would be tolerated.

Ofcom’s investigation and the responsibility of platforms

In parallel with the implementation of the law, Ofcom, the UK’s communications regulator, opened a formal investigation into the X platform. The investigation aims to determine whether the company violated Online Safety Act requirements by allowing its AI tool to be used to create abusive content.

The focus of the investigation is to assess whether the platform took the appropriate and necessary security measures to protect users, especially with regard to preventing the generation of non-consensual intimate images and material that could constitute child sexual abuse.

Ofcom has significant powers to impose sanctions, which may include fines of up to 10% of the company’s global revenue or the imposition of mandatory corrective measures. The government’s pressure is for the investigation to be completed quickly and transparently.

This case is seen as a crucial test of the effectiveness of the Online Safety Act in the era of generative AI. The outcome could set an important precedent for the level of responsibility technology platforms bear for the AI tools they integrate into their services.

Measures to protect women and girls

During her speech, Minister Liz Kendall repeatedly emphasized that the proliferation of sexual deepfakes represents a serious form of violence and abuse, with a disproportionate impact on women and girls. She stated that these fake images are not “a bit of harmless fun” but tools used to humiliate, control and silence their victims.

The criminalization of the creation of these materials is part of the Labour government’s broader strategy to combat violence against women in the digital environment. Legislative action is seen as an essential step towards ensuring that the online space is safer and that victims of digital abuse have the backing of the law to seek justice.

Details about criminalized acts

The new law is specific in classifying as an offence the intentional creation of an image or video that appears to show a person intimately, knowing that the person did not consent to its creation. This covers a wide range of scenarios, from altering an existing photo to generating an entirely new scene with AI.

The legislation also paves the way for future regulations that explicitly prohibit the marketing and supply of “nudification” apps and software. The government has signaled that it will act to crack down on companies that profit from facilitating this type of digital abuse.

This approach fills a critical gap, as previously the legal focus was on the act of sharing the image. By punishing the creation, the justice system can intervene before the material spreads and causes irreparable harm to the victim.

Consequences for the technology and AI industry

The UK legislation sends a clear message to artificial intelligence developers and technology companies around the world: responsibility for safety cannot be an afterthought, but must be a central component in the development and deployment of new generative technologies.

Companies now face much greater regulatory scrutiny and a legal obligation to build robust safeguards into their tools to prevent malicious use. This can include more effective filters, proactive moderation, and mechanisms that block the generation of harmful content from the outset.

Reaction in Parliament

The decision to activate the law was met with broad support in the British Parliament, with members from different parties recognizing the urgency of legislating on the emerging dangers of artificial intelligence. The political consensus demonstrates the seriousness with which the issue is being addressed and strengthens the UK’s position as one of the leaders in regulating technology to protect individual rights in the digital environment.