Artificial Intelligence, Defamation, and New Speech Frontiers
As ChatGPT and other generative AI platforms have taken off, they have demonstrated the exciting potential benefits of artificial intelligence while raising myriad open questions and complexities, from how to regulate the pace of AI's growth to whether AI companies can be held liable for misinformation reported or generated through their platforms. Earlier this month, the first-ever AI defamation lawsuit was filed by a Georgia radio host who claims that ChatGPT falsely accused him of embezzling money. The case presents novel legal questions: What happens if AI reports false and damaging information about a real person? Should that person be able to sue the AI's creator for defamation?
In this episode, two leading First Amendment scholars, Eugene Volokh of UCLA Law and Lyrissa Lidsky of the University of Florida Law School, join to explore the emerging legal issues surrounding artificial intelligence and the First Amendment. They discuss whether AI has constitutional rights; who, if anyone, can be sued when AI fabricates or misstates information; whether artificial intelligence might lead to new doctrines governing the regulation of online speech; and more.