CDT Research Shows AI Chatbots Serve Up Election Misinformation
As the 2024 United States elections draw closer, chatbots powered by artificial intelligence (AI) have grown in popularity and availability, introducing a new and largely untested vector for election-related information. Voters with and without disabilities may use chatbots to ask practical questions about the time, place, and manner of voting, but new CDT research shows this raises significant risks.
In a new report, CDT tested five major chatbots to see how they'd respond to questions about voting with a disability. The results showed that chatbots are likely to spread election-related and disability-related misinformation that could impede users from exercising their right to vote, or undermine voters’ confidence in the election itself. Specifically, our testing revealed that:
- A quarter of responses could dissuade, impede, or prevent users from exercising their right to vote — especially users navigating the complicated process of voting while disabled.
- More than one third of all answers included false information — including incorrect voter registration deadlines and misinformation about the availability of curbside voting.
- Every model hallucinated at least once, sharing information about laws, voting equipment, and disability rights organizations that simply don’t exist.
The report demonstrates how the harms chatbots can cause in an election context are amplified for voters with disabilities, due to barriers like inaccessible polling places that make voting more difficult, and a confusing patchwork of laws across the United States regulating voting for people with disabilities. In response to the information integrity concerns surfaced by the report, we provide recommendations for how users and chatbot developers, respectively, can best use and improve these systems.
Cover graphic for CDT report, entitled "In Deep Trouble: Surfacing Tech-Powered Sexual Harassment in K-12 Schools." Illustration of a cell phone and social media and messaging posts, floating amongst a dark and choppy body of water.
— While companies have an important role to play in stopping IBSA, so too do other institutions, including schools. Last week, CDT published groundbreaking new research revealing the high prevalence of tech-powered sexual harassment in K-12 schools. Nearly four in ten students say that they have heard about non-consensual intimate images (NCII) depicting individuals associated with their school, representing 5.97 million out of 15.3 million public high school students in the U.S. CDT’s report showed that, while schools are quick to respond when NCII is reported, few have proactive policies to prevent it in the first place.
— CDT joined with allies to file an amicus brief in the Supreme Court in Free Speech Coalition v. Paxton, arguing that shortcomings in currently available age verification methods will prevent and chill access to constitutionally protected speech, and increase risks to both privacy and security online. We also jointly filed an amicus brief in Zuckerman v. Meta, arguing that intermediary liability law Section 230 confers immunity for user-empowerment technologies like the browser extension Unfollow Everything 2.0.
CDT report, entitled “Moderating Maghrebi Arabic Content on Social Media.” Illustration of a hand, wearing rings as well as bracelets with hand of Fatima around its wrist, holding a red and purple phone that shows messages with Arabic letters in them being moderated or acted upon.
CDT in the Press — CDT’s Aliya Bhatia and Ariana Aboulafia co-authored a new op-ed in Teen Vogue, arguing that age verification technology would create new barriers for young disabled people.
— CDT’s Alex Givens and Elizabeth Laird discussed CDT’s research on non-consensual intimate imagery in schools with The Atlantic: Generative-AI tools have “increased the surface area for students to become victims and for students to become perpetrators,” Laird told the publication.
Graphic for CDT's Tech Prom on November 14, 2024 at The Anthem in Washington, D.C. Big block letters (blue and purple) being showered in confetti.
Graphic with thought bubble with patriotic decorations and text: "Future of Speech Online (FOSO): AI, Elections, & Speech. September 16-17, 2024."
— In September, CDT and Stand Together Trust hosted the annual Future of Speech Online conference, where we explored the intersection of AI, elections, and speech. Weren’t able to join us? Recordings of the event are online: catch up with our deep dives into what’s needed to bolster free expression and protect free and fair elections globally, and the risks and potential benefits of new AI technologies for the elections taking place around the world this year.
Drew Courtney. CDT's Director of Communications. Headshot image of man in front of greenery.
How long have you been working in digital rights? For my entire career and not for very long at all. I've worked on civil rights and civil liberties issues since I started doing advocacy work almost 20 years ago, but I've only been able to focus on those issues in the digital space since I joined CDT earlier this year. It's been great to work with colleagues who have such deep expertise in this area.
What is your proudest moment while here at CDT? Right now! CDT's program teams are really firing on all cylinders in the lead-up to the election. We're putting out exciting, important work every week — and sometimes every day — all through the autumn. It's gratifying to help all that work get in front of the right audiences.
What is the best book you've read recently? I recently reread Hilary Mantel's Wolf Hall trilogy. I was reminded just how extraordinary it is.
What is the most recent cultural activity you’ve been to? I went to see PJ Harvey perform last week. She's amazing—worth staying up for on a school night!