From: Project Liberty <[email protected]>
Subject: Do we need to copyright our own faces now?
Date: January 20, 2026 5:36 PM
  Links have been removed from this email. Learn more in the FAQ.
A new law aims to copyright your face and likeness

View in browser ([link removed] )

January 20th, 2026 // Did someone forward you this newsletter? Sign up to receive your own copy here. ([link removed] )

Do we need to copyright our own faces now?

In 2021, Marie Watson, a Danish video-game blogger, received an image ([link removed] ) of herself from an unfamiliar Instagram account.



The image was unmistakably her own, lifted from her public feed. But it had been altered to depict her nude. She was the victim of a sexualized deepfake.



Since then, deepfakes have spread rapidly, targeting women at disproportionate rates ([link removed] ) and increasingly blurring the lines between personal harm and public misinformation.



Last year, Denmark moved to confront the rise of deepfakes ([link removed] ) with a novel legal approach, extending copyright protections to cover an individual’s likeness and digital identity. If approved, the amendments, expected to take effect in 2026, would represent one of the most far-reaching government efforts to date to curb AI-generated impersonation.

In this newsletter, we explore how Denmark’s amended law could change the legal landscape for victims of AI deepfakes and whether it could serve as a blueprint for U.S. and global AI regulations.

// A first-of-its-kind law

Denmark’s amendments to its existing Copyright Act would ensure citizens can demand that online platforms remove deepfake content when it is shared without consent.

- Platforms would be fined for failing to comply.

- The proposed bill would also protect artists against “realistic, digitally generated imitations” of their performances made without consent.

- Parodies and satire would still be permitted.

“Since images and videos also quickly become embedded in people’s subconscious, digitally manipulated versions of an image or video can create fundamental doubts about—and perhaps even a completely wrong perception of—what are genuine depictions of reality,” Danish lawmakers wrote ([link removed] ) . “The agreement is therefore intended to ensure the right to one’s own body and voice.”



Thomas Heldrup, the head of content protection and enforcement at the Danish Rights Alliance, explained ([link removed] ) why the group chose the route of copyright law: “We believe [it] will give us the best conditions to enforce the rights when reporting [deepfakes] to online platforms.”



And yet, it remains unclear how the amendment would integrate with existing EU frameworks, such as the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA). The GDPR protects biometric data, while the DSA governs illegal content, requiring takedowns of material that violates privacy laws. A report by the EU’s Parliamentary Research Service ([link removed] ) , which analyzed the implications of the Danish amendments, concluded that the Danish law would be enforced through the DSA, but it raised concerns about layering new rights on top of existing ones.

// The many harms of deepfakes

Deepfakes made headlines ([link removed] ) in 2024 when sexually explicit images of Taylor Swift circulated on the platform X. In 2025 alone, AI video apps ([link removed] ) generated over 8 million videos ([link removed] ) (though the exact figure is hard to measure).

- In past newsletters ([link removed] ) , we discussed the implications of AI video apps like Sora 2, which NPR characterized as ([link removed] ) giving deepfakes “a publicist and a distribution deal.” Last week, X was awash with alarming videos ([link removed] ) showcasing recent deepfake technology.

- Audio deepfakes ([link removed] ) are just as prolific and harmful. It only takes a few seconds of captured audio to create a convincing voice replication, leading to ballooning consumer fraud.

- According to Sensity AI ([link removed] ) , a synthetic media monitoring company, 96% of deepfakes are nonconsensual pornography.

- In New Jersey ([link removed] ) , school bullying ramped up with the help of generative AI, with classmates using it to “nudify” images of underage girls.

// People want control of their data

Deepfakes are what happens when the data economy grows up and learns to impersonate. Once your face, voice, and mannerisms are treated as raw material, the boundary between “my data” and “me” collapses.

Denmark’s proposal is part of a broader trend. People are no longer just worried about privacy in the abstract. They are concerned about losing authorship over their lives online.

Research from Project Liberty Institute ([link removed] ) echoes this alarm. Across seven countries, only 18% of respondents say they feel firmly in control of their personal data and how it is collected. A second, U.S.-based study ([link removed] ) found:

- 67% of Americans admit they have little or no control over the personal data collected by social media platforms or other online companies.

- 91% support the creation of a national data privacy law.

// Copyright law vs. personality rights

But is copyright law the best approach to protecting people from deepfakes?



An alternative framework to consider is that of personality rights ([link removed] ) , also known as publicity rights: the rights an individual has to control the commercial use of their name, image, and likeness. Unlike a copyright, which is a transferable, purchasable commodity, personality rights are inalienable and cannot be commodified.



In a critique of Denmark’s approach, Alice Lana, a lawyer and board member of the Copyright Observatory Institute, speculated to Tech Policy Press ([link removed] ) that it could distort the purpose of copyright law and create unintended consequences. “Copyright can be a trap; it can turn our bodies into consumer goods. What kinds of precedents does it open?”



In the U.S., enforcement of personality rights varies by state, definitions of what is protected remain vague ([link removed] ) in legal settings, and the law does not comprehensively address the risks posed by AI.

Scientific American lays out a clear comparison of these frameworks.

- In a 2024 study ([link removed] ) , researchers posted deepfakes on the platform X and reported half of them as copyright infringement and half as “nonconsensual nudity.”

- The platform removed the deepfakes reported under copyright claims, responding more quickly to legal precedent than to privacy terms.

- By leveraging copyright law, Denmark may be taking a practical approach to removing deepfakes quickly.

// Momentum is building

In the U.S., policy is catching up with consumer alarm. (Alarm spiked recently after Grok rolled out new features ([link removed] ) last week, allowing users to generate nonconsensual, sexual deepfakes.)



At the federal level, the TAKE IT DOWN Act ([link removed] ) , enacted in May 2025 and set to take effect this year, criminalizes the nonconsensual publication of intimate images, including deepfakes.

But as a recent op-ed in The New York Times ([link removed] ) outlined, federal laws could also make it harder to ensure AI models are safe. The authors argue that robust federal policies must be paired with a legal safe harbor allowing AI companies to test their own models for CSAM and nonconsensual sexual imagery without fear of criminal prosecution.

At the state level:

- Several states, including California, New Jersey, Delaware, and Texas, have already instituted digital media literacy ([link removed] ) curricula. Research ([link removed] ) shows that education about deepfakes reduces their dissemination and increases the rate at which they are identified as AI-generated.

- As of December 2025, approximately 47 U.S. states ([link removed] ) had enacted laws regulating deepfakes. Project Liberty ([link removed] ) advised on the Digital Choice Act (HB418), Utah legislation that gives users unprecedented control over their social media data.

Globally, countries including South Korea, Canada, China, and the U.K. have taken approaches ([link removed] ) ranging from requiring transparency to criminalizing, educating, and promoting public discourse on the threats posed by deepfakes and unethical AI use.



Meanwhile, deepfake detection technologies are getting better. By combining voice detection, biomarker tracking, metadata, and color abnormalities, platforms are beginning to experiment ([link removed] ) with ways to detect and take down deepfakes.

// Reclaiming our digital identities

It remains to be seen whether Denmark’s copyright amendment will be approved, how it would integrate into EU law, and whether copyright is the best approach to protecting an individual’s digital identity.



It’s possible that Denmark’s policy could become a blueprint for other policymakers, or it could simply serve as evidence that policymakers are responding to constituents who demand greater control, more platform accountability, and a sense of authenticity in an era of synthetic media.

📰 Other notable headlines

// 🕹 Meta, once boldly committed to the metaverse, has discontinued its metaverse for the workplace, according to an article in The Verge ([link removed] ) . (Paywall).

// 💵 Ads are coming to ChatGPT. An article in WIRED ([link removed] ) explained how they’ll work. (Paywall).

// 🌐 Wikipedia signed major AI firms to new priority data access deals, according to an article in Ars Technica ([link removed] ) . (Free).

// 📧 AI has arrived in Gmail. Gemini can create a to-do list based on recent emails, among other new tricks. But there are implications for your privacy, according to an article in The New York Times ([link removed] ) . (Paywall).

// 🎰 An article in The Atlantic ([link removed] ) asked, why is the media obsessed with prediction markets like Polymarket? (Paywall).

// 📱 According to an article in The Markup ([link removed] ) , California is investigating Elon Musk’s AI company after an ‘avalanche’ of complaints about sexual content. (Free).

// 🎙 An episode of the Hard Fork podcast ([link removed] ) featured Jonathan Haidt to talk about social media and AI. (Free).

// 💬 WhatsApp has become a core technology around the world, relied on by governments and extended families alike. An article in The New Yorker ([link removed] ) reflected on the state of global communications controlled by Meta. (Paywall).

// 🤔 An article in Gizmodo ([link removed] ) reported on Anthropic’s warning that AI will worsen inequality. (Free).

// 📰 Claude is taking the AI world by storm, and even non-nerds are blown away. Developers and hobbyists are comparing the viral moment for Anthropic’s Claude Code to the launch of generative AI, according to an article in the Wall Street Journal ([link removed] ) . (Paywall).

// 💼 The rise of AI is disrupting labor markets at an unprecedented pace, driving down wages and eliminating entire professions faster than new ones can emerge. An article in Project Syndicate ([link removed] ) asked, what happens when paid work disappears? (Paywall).

Partner news

Public Domain Day 2026: The case of the disappearing copyright

January 21 | Virtual & San Francisco, CA

Internet Archive ([link removed] ) and partners are hosting a series of Public Domain Day events to celebrate works from 1930 and 1925 entering the public domain. With virtual ([link removed] ) and in-person ([link removed] ) programs, the celebration highlights newly free cultural icons like Nancy Drew, The Maltese Falcon, All Quiet on the Western Front, and classic songs now open for reuse and remixing.

// The 2026 Ethics and Technology Practitioner Fellows

Stanford has announced its 2026 cohort of Ethics and Technology Practitioner Fellows, welcoming 13 mid-career professionals working at the frontlines of technology, policy, and society. Click here ([link removed] ) to learn more about the fellows.

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

// Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.

Thank you for reading.

Facebook ([link removed] )

LinkedIn ([link removed] )

Twitter ([link removed] )

Instagram ([link removed] )

Project Liberty footer logo ([link removed] )

10 Hudson Yards, Fl 37,

New York, New York, 10001

Unsubscribe ([link removed] ) Manage Preferences ([link removed] )

© 2026 Project Liberty LLC