
A new sense of self: Denmark's copyright amendment against deepfakes

We have all seen or read about examples of media created by generative artificial intelligence (GenAI): synthesized images, videos and audio that appear to represent real people or famous characters. From the benign to the blameworthy and every conceivable shade in between, this rising phenomenon has only just begun to make its cultural presence felt.

It is only natural, then, that as AI-generated fakes have become easier to create, harder to distinguish and more widely used, worries about their danger, legality and societal impact have intensified. In particular, concerns have been raised about the use of deepfakes to create sexually explicit images and videos, false endorsements, misleading political statements and defamatory speech. Even where the falsehood is obvious, the credibility of AI videos has reached a level where the moral offence and emotional distress caused may be severe.

These and other threats have led to calls for greater regulation. Though existing Intellectual Property (IP) frameworks, however far they are expanded, are unlikely to be a panacea for all the ills of deepfakes, Denmark is leading the way in Europe by addressing the technology in legislation.

Changing the law in Denmark

In June 2025, the Danish government proposed amendments to the country's Copyright Act to confront deepfakes. The modifications have cross-party support and are expected to come into force on March 31, 2026. Crucially, the reforms address both digital imitations of individuals and those of artists' performances.

In a note accompanying the amendments, the government reflected that GenAI tools mean it will soon be impossible to distinguish real from fabricated material. It added: "Since images and videos also quickly become embedded in people's subconscious, digitally manipulated versions of an image or video can create fundamental doubts about – and perhaps even a completely wrong perception of – what are genuine depictions of reality." At the very worst, it said, such impersonations have the potential to become "a real democratic problem."


The proliferation of GenAI tools for images and video has made misinformation easier to produce at volume and arguably more convincing than ever. Meanwhile, technology providers are locked in an accelerating arms race for realism and market share.

In light of this, the amendments essentially introduce two new safeguards:

  1. The bill inserts a provision, section 65a, into the Copyright Act, which states that realistic digital imitations of a performing artist's performance may not be made available to the public without the consent of the performing artist. A subsection provides that this protection lasts 50 years from the year of the artist's death.
  2. The bill adds section 73a, providing that realistic digitally generated imitations of a natural person's personal, physical characteristics (such as appearance and voice) may not be made available to the public without the subject's consent. This protection also lasts for 50 years after the death of the person imitated. However, there is an exception in section 73a for reproductions that are "mainly an expression of caricature, satire, parody, pastiche, criticism of power, social criticism, etc.," unless the imitation constitutes misinformation that can grievously imperil the rights or interests of others.

The Danish Presidency of the Council of the European Union hosted a Conference on Copyright and AI in Copenhagen on September 18, 2025, where the matter was further discussed. 

The European context

The Danish proposals represent a significant step to regulate deepfakes, building upon EU initiatives stretching back a number of years that tackle issues of privacy and, latterly, the use of GenAI.

First, the General Data Protection Regulation (GDPR), which was passed in 2016, gives individuals broad rights over their personal information. As well as having direct effect throughout the EU, the GDPR was retained in UK law (as the UK GDPR) following Brexit and has been a model for legislation in several other countries.


The potential of AI for mass surveillance and societal influence has inspired some and perturbed others. To address the threat, the EU has already passed measures to prohibit activities that pose a high risk to citizens' rights and liberties.

Article 4 of the GDPR defines personal data as "any information relating to an identified or identifiable natural person." This may be effective against manipulated content if the replica is clear enough to identify the data subject and is not subject to one of the exceptions in the GDPR.

Second, the Digital Services Act (DSA), which entered into force in 2022, imposes certain obligations on digital intermediary services and (very large) online platforms. Article 16 of the DSA requires providers of hosting services to put in place mechanisms "to allow any individual or entity to notify them of the presence on their service of specific items of information that the individual or entity considers to be illegal content." This could include deepfakes, particularly if the imitation is offensive or defamatory.

Finally, Article 50 of the Artificial Intelligence Act, which was passed in 2024, sets out a number of transparency obligations relating to the provision and use of certain AI systems. These include that: "Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated." 

Implications for IP practitioners

Given the rapid development of GenAI, the regulation of deepfakes will likely evolve further. In addition to the Danish proposal, there are numerous pieces of legislation in other jurisdictions that partially address forged likenesses, including the UK Online Safety Act 2023 and the recent U.S. Take It Down Act (which is limited to intimate images). 

In October 2024, the Republic of Korea updated its Act on Special Cases Concerning the Punishment of Sexual Crimes to introduce harsh sentences for offences involving deepfakes.


Almost as quickly as AI tools are advancing in capability, countries are diverging in their regulatory attitudes: the United States is taking a more permissive stance to foster innovation, whereas its European and Asian counterparts tend to be optimistic but wary.

"A person who edits, synthesizes, or processes photograph, video, or audio […] targeting the face, body or voice of a person in a form that may cause sexual desire or shame against the will of the person who is subject to video, etc. […], shall be punished by imprisonment with labor for not more than seven years or a fine of not more than 50 million won."

The word "shame," specifically, is to be understood within the country's cultural mores and may implicate behaviors considered merely boorish elsewhere. Even knowingly viewing offending material may be punishable by three years' incarceration with labor or a fine of 30 million won.

Most recently, Italy has outlawed the dissemination of deepfakes that cause unjust harm as part of its AI Law (Law 132/2025). The offence carries a penalty of between one and five years in prison, but how the revised criminal code will be applied is not yet clear; the interpretation of "unjust harm" is likely to be key.

The Danish amendments are notable for their broad definition of artistic works and could be a powerful weapon for copyright owners taking action against digitally generated copies. On their face, the provisions protecting individuals also appear strong, though the scope of the exception, and of the exception to that exception, remains to be seen and tested.

As with all legislative changes in this hurriedly developing field, it will be important to monitor the implementation and enforcement of the Danish law to trace what impact it will have in practice. One thing is for certain: we will be seeing a lot more of deepfakes, both on our screens and in the statute books.

A version of this article was originally published in CITMA Review magazine, January–February 2026.
