Do we need to copyright our own faces now?
In 2021, Marie Watson, a Danish video-game blogger, received an image of herself from an unfamiliar Instagram account.
The image was unmistakably her own, lifted from her public feed. But it had been altered to depict her nude. She was the victim of a sexualized deepfake.
Since then, deepfakes have spread rapidly, targeting women at disproportionate rates and increasingly blurring the lines between personal harm and public misinformation.
Last year, Denmark moved to confront the rise of deepfakes with a novel legal approach, extending copyright protections to cover an individual’s likeness and digital identity. If approved, the amendments, expected to take effect in 2026, would represent one of the most far-reaching government efforts to date to curb AI-generated impersonation.
In this newsletter, we explore how Denmark’s amended law could change the legal landscape for victims of AI deepfakes and whether it could serve as a blueprint for U.S. and global AI regulations.
// A first-of-its-kind law
Denmark’s amendments to its existing Copyright Act would ensure citizens can demand that online platforms remove deepfake content when it is shared without consent.
- Platforms would be fined for failing to comply.
- The proposed bill would also protect artists when “realistic, digitally generated imitations” of their performances are created without consent.
- Parodies and satire would still be permitted.
“Since images and videos also quickly become embedded in people’s subconscious, digitally manipulated versions of an image or video can create fundamental doubts about—and perhaps even a completely wrong perception of—what are genuine depictions of reality,” Danish lawmakers wrote. “The agreement is therefore intended to ensure the right to one’s own body and voice.”
Thomas Heldrup, the head of content protection and enforcement at the Danish Rights Alliance, explained why the group chose the route of copyright law: “We believe [it] will give us the best conditions to enforce the rights when reporting [deepfakes] to online platforms.”
And yet, it remains unclear how the amendment would integrate with existing EU frameworks, like the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA). The GDPR covers biometric data protection, while the DSA governs illegal content, requiring takedowns of content that violates privacy laws. According to a report by the European Parliamentary Research Service analyzing the implications of the Danish amendments, the new rights would be enforced through the DSA’s takedown mechanisms, though the report expressed concern about layering new rights on top of existing law.
// The many harms of deepfakes
Deepfakes made headlines in 2024 when sexually explicit images of Taylor Swift circulated on the platform X. In 2025 alone, AI video apps generated more than 8 million videos (though exact figures are hard to measure).
- In past newsletters, we discussed the implications of AI video apps like Sora 2, which NPR characterized as giving deepfakes “a publicist and a distribution deal.” Last week, X was awash with alarming videos showcasing recent deepfake technology.
- Audio deepfakes are just as prolific and harmful. A few seconds of captured audio is enough to create a convincing voice clone, fueling a surge in consumer fraud.
- According to Sensity AI, a synthetic-media monitoring company, 96% of deepfakes are nonconsensual pornography.
- In New Jersey, school bullying has ramped up with the help of generative AI, with classmates using it to “nudify” images of underage girls.
// People want control of their data
Deepfakes are what happens when the data economy grows up and learns to impersonate. Once your face, voice, and mannerisms are treated as raw material, the boundary between “my data” and “me” collapses.
Denmark’s proposal is part of a broader trend. People are no longer just worried about privacy in the abstract. They are concerned about losing authorship over their lives online.
Research from Project Liberty Institute echoes this alarm: across seven countries, only 18% of respondents said they feel firmly in control of their personal data and how it is collected.
// Copyright law vs. personality rights
But is copyright law the best approach to protecting people from deepfakes?
An alternative framework is that of personality rights, also known as publicity rights: the rights an individual has to control the commercial use of their name, image, and likeness. Unlike copyright, which can be transferred and sold like a commodity, personality rights are inalienable and cannot be commodified.
In a critique of Denmark’s approach, Alice Lana, a lawyer and board member of the Copyright Observatory Institute, told Tech Policy Press that it could distort the purpose of copyright law and create unintended consequences. “Copyright can be a trap; it can turn our bodies into consumer goods. What kinds of precedents does it open?”
In the U.S., enforcement of personality rights varies by state, definitions of what is protected remain vague in legal settings, and the law does not comprehensively address the risks posed by AI.
A study described in Scientific American offers a clear comparison of the two frameworks in practice:
- In a 2024 study, researchers posted deepfakes on the platform X and reported half as copyright infringement and half as “nonconsensual nudity.”
- The platform removed the deepfakes reported under copyright claims, responding more quickly to legal precedent than to its privacy terms.
- By leveraging copyright law, Denmark may be taking a pragmatic approach to getting deepfakes removed quickly.
// Momentum is building
In the U.S., policy is catching up with consumer alarm. (Alarm spiked after Grok rolled out features last week that let users generate nonconsensual sexual deepfakes.)
At the federal level, the TAKE IT DOWN Act, enacted in May 2025 and set to take effect this year, criminalizes the nonconsensual publication of intimate images, including deepfakes.
But as a recent op-ed in The New York Times outlined, federal laws might make it harder to ensure AI models are safe. Its authors argue that robust federal policies need to be paired with a legal safe harbor allowing AI companies to test their own models for child sexual abuse material (CSAM) and nonconsensual sexual imagery without fear of criminal prosecution.
At the state level:
- Several states, including California, New Jersey, Delaware, and Texas, have instituted digital media literacy curricula. Research shows that education about deepfakes reduces their dissemination and increases the rate at which they are identified as AI-generated.
- As of December 2025, approximately 47 U.S. states have enacted laws regulating deepfakes. Project Liberty advised on the Digital Choice Act (HB418), Utah legislation that gives users unprecedented control over their social media data.
Globally, countries including South Korea, Canada, China, and the U.K. have taken approaches ranging from transparency requirements and criminal penalties to education and public discourse on the threats posed by deepfakes and unethical AI use.
Meanwhile, deepfake detection technologies are improving. By combining signals such as voice analysis, biomarker tracking, metadata checks, and color abnormalities, platforms are beginning to experiment with ways to detect and take down deepfakes.
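To make that multi-signal idea concrete, here is a minimal sketch in Python of how a platform might fuse several detector outputs into a single review score. The detector names, weights, and threshold here are hypothetical illustrations under the assumptions above, not any platform’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class DetectionSignals:
    """Hypothetical per-item detector outputs, each a probability in [0, 1]."""
    voice_score: float      # e.g., synthetic-speech classifier output
    biomarker_score: float  # e.g., absence of natural biological cues
    metadata_score: float   # e.g., missing or inconsistent provenance data
    color_score: float      # e.g., color/compression abnormalities

def combined_score(s: DetectionSignals) -> float:
    """Weighted average of the individual detector outputs (illustrative weights)."""
    return (
        0.3 * s.voice_score
        + 0.3 * s.biomarker_score
        + 0.2 * s.metadata_score
        + 0.2 * s.color_score
    )

def flag_for_review(s: DetectionSignals, threshold: float = 0.7) -> bool:
    """Flag content for human review when the combined score crosses a threshold."""
    return combined_score(s) >= threshold

if __name__ == "__main__":
    sample = DetectionSignals(voice_score=0.9, biomarker_score=0.8,
                              metadata_score=0.5, color_score=0.6)
    print(f"combined score: {combined_score(sample):.2f}")
    print("flag for review:", flag_for_review(sample))
```

In practice, real systems weight and calibrate such signals empirically and route flagged items to human moderators rather than removing content automatically; the sketch only shows the basic fusion-and-threshold pattern.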
// Reclaiming our digital identities
It remains to be seen whether Denmark’s copyright amendment will be approved, how it will integrate into EU law, and whether copyright law is the best approach to protecting an individual’s digital identity.
It’s possible that Denmark’s policy could become a blueprint for other policymakers, or it could simply serve as evidence that policymakers are responding to constituents who demand greater control, more platform accountability, and a sense of authenticity in an era of synthetic media.