Breaking Down Proposals for Nonconsensual Deepfake Regulation
The technology to create nonconsensual deepfakes has existed for years, but a new wave of AI tools released without safeguards has made it easy for anybody (including
children) to generate deceptive and sexually explicit images of other people. To address this problem, the House Committee on Oversight and Accountability held a
hearing to explore different regulatory approaches – some of which would be far friendlier than others to AI companies, online platforms, and even perpetrators.
Unsurprisingly, industry advocates favor minimal regulation. Carl Szabo, who served as a witness for the tech trade association NetChoice,
cautioned against “innovation-chilling” laws and claimed that existing regulations were mostly sufficient to address AI harms. Immediately after Szabo’s comments, U.C. Irvine law professor Dr. Ari Ezra Waldman
remarked that calls to enforce current laws were “the last bastion for those who want a deregulatory agenda,” and that Congress needed to pass legislation that would explicitly deter deepfake pornography. NetChoice has
endorsed two anti-deepfake bills authored by the American Legislative Exchange Council (ALEC), which serves as a resource for conservative policymakers. Notably, the bills would
criminalize the production of deepfake child sexual abuse material (CSAM) while merely creating
civil penalties for the production of nonconsensual deepfakes. Most state
revenge pornography laws, in contrast, establish criminal penalties for circulating real images without the victim’s consent. In his testimony, Professor Waldman
argued that civil penalties were a “step forward” but ultimately insufficient, because they put the
burden of enforcement on victims – a position with which subcommittee Chairwoman Nancy Mace
strongly agreed.
Hearing Highlight
John Shehan, who serves as the Vice President of the National Center for Missing and Exploited Children (NCMEC), told lawmakers that most generative AI startups aren’t taking basic steps to prevent users from creating AI CSAM. According to Shehan, the well-established company Stability AI isn’t even registered to submit reports to NCMEC’s CyberTipline, despite being partnered with
Amazon Web Services and valued at over
one billion dollars.