From Project Liberty <[email protected]>
Subject How misinformation could affect the election
Date October 8, 2024 3:32 PM

View in browser ([link removed] )

October 8th, 2024 // Did someone forward you this newsletter? Sign up to receive your own copy here. ([link removed] )

How misinformation could affect the election

Will AI-powered misinformation sway the upcoming US election?

The US election is only 28 days away, and technologists, researchers, policymakers, and citizens are bracing for the worst, as generative AI technology has dramatically increased the amount of misinformation online.

In one prominent example from earlier this year, a series of fake robocalls ([link removed] ) in New Hampshire used an AI-generated impersonation of President Biden's voice to urge Democrats not to vote in the state's primary.

As we near the 2024 US Presidential Election, the (possible) role that foreign misinformation played ([link removed] ) in the 2016 US election looms large. But despite the proliferation of AI-generated false news, progress has been made in the last eight years—from content moderation policies at tech firms to laws passed by numerous states to new AI tools that identify deepfakes.

Will it be enough? Let’s take a closer look.

// The growth of misinformation

Misinformation in the age of AI takes many forms: news sites built entirely by AI, misleading social media posts and images, inaccurate AI chatbot answers, and audio and video deepfakes.

- A paper released by Google researchers ([link removed] ) earlier this year found meteoric growth of image-related misinformation starting in 2023.
- In May of last year, NewsGuard ([link removed] ), an organization that tracks misinformation and a Project Liberty Alliance ([link removed] ) member, found 49 news and information sites written entirely by AI chatbots. Today, less than 18 months later, it has identified over 1,000 such sites.
- Over the summer, the US Justice Department announced that it had disrupted a Russian propaganda campaign ([link removed] ) using fake social media accounts, powered by AI. Meanwhile, OpenAI banned ChatGPT accounts ([link removed] ) that were linked to an Iranian group attempting to sow division among US voters.

//

94% of US citizens are concerned that the spread of misinformation will impact the upcoming election.

//

// Citizens are alarmed

- A poll last month by Adobe’s Content Authenticity Initiative found ([link removed] ) that 94% of US citizens are concerned that the spread of misinformation will impact the upcoming election, and 87% of respondents said that the rise of generative AI has made it more challenging to discern fact from fiction online.
- A 2023 poll by the University of Chicago ([link removed] ) found that more than half of Americans believe that the federal government should outlaw all uses of AI-generated content in political ads.
- New research has found that Gen Z and Millennials might even be more vulnerable to fake news ([link removed] ) than other generations.

// The bias for action

The ubiquity of AI-generated misinformation requires a multidimensional response.

What big tech is doing

- Meta has made changes to its algorithms ([link removed] ) in recent years. In 2021, Meta decided to push political and civic content lower in its feeds. Earlier this year, the company announced ([link removed] ) that it would deprioritize the recommendation of political content on Instagram and Threads.
- X made changes to its AI ([link removed] ) chatbot after five secretaries of state warned it was spreading election misinformation ([link removed] ) .
- TikTok outlined the steps ([link removed] ) it's taking to protect election integrity in 2024, including deterring covert influence operations and preventing paid political advertising on its site.

What policymakers are doing

- In 2019, Texas became the first state ([link removed] ) to ban the creation and distribution of deepfake videos intended to hurt a candidate or influence an election.
- Today, 26 US states have passed or are considering bills regulating the use of AI in election-related communications, according to an analysis by Axios ([link removed] ) .

At the federal level, the Federal Election Commission has walked back efforts to control election-related misinformation, saying it lacks the statutory authority ([link removed] ) to make rules about misrepresentations using deepfake audio or video. An article in The Atlantic ([link removed] ) argued that the deeper issue is that Congress has not clearly given any agency the responsibility to keep political advertisements grounded in reality.

// Fighting misinformation vs. protecting speech

Taking down misleading or false content can encroach on protections for free speech. In one notable example of overreach ([link removed] ) , in the lead-up to the 2020 Presidential election, the FBI alerted Facebook that Russian operatives were possibly attempting to spread false information about Hunter Biden and his relationship to a Ukrainian energy company. Based on this information, Facebook (now Meta) suppressed reporting by The New York Post about emails found on Hunter Biden’s laptop.

The story later turned out to be true, not Russian disinformation. Meta CEO Mark Zuckerberg conceded that the company shouldn't have demoted it in users' feeds.

“In 2021, senior officials from the Biden Administration, including the White House, repeatedly pressured our teams for months to censor certain COVID-19 content, including humor and satire, and expressed a lot of frustration with our teams when we didn’t agree,” he wrote ([link removed] ) in a letter to the House Judiciary Committee.

Tech platforms and regulators need to walk a tightrope: identifying potentially misleading or fake information, fact-checking it, and deciding if and when to take it down, all while protecting speech.

// New incentive structures for social media

Social media is not the only place misinformation spreads online, but new research makes it increasingly clear that the incentive systems of social media platforms encourage users to spread it.

How? One study by researchers at Yale ([link removed] ) found that a small number of Facebook super-users and super-sharers had an outsized impact on the spread of misinformation: the most habitual 15% of Facebook users were responsible for 37% of the false headlines shared. In 2021, the Center for Countering Digital Hate found a similar imbalance: just twelve individuals spread two-thirds ([link removed] ) of the anti-vaccination content on Facebook and X.

Combating misinformation requires a number of approaches, policies, and technologies, but no solution will be effective if we don’t address the underlying incentive systems and architecture of our digital spaces.

Project Liberty is focused on innovations at both the governance and protocol levels of the web. Changing the incentive structures that underpin social media platforms like TikTok could change the content that gets shared and elevated, as well as the amount of control users have over their online experience. This vision anchors The People’s Bid to acquire TikTok ([link removed] ) .

// The battle for truth

And yet, redesigning the incentive structures of the internet won’t happen before November’s election. In the meantime, not all hope is lost.

- New research from the University of San Diego ([link removed] ) found that machine learning algorithms significantly outperform human judgment in detecting people’s lies.
- New AI-powered technology ([link removed] ) can distinguish deepfakes from authentic images, video, and audio files.
- The federal government has banned robocalls ([link removed] ) that use AI-generated voices and has offered cash prizes ([link removed] ) for solutions that mitigate the harms of voice-cloning fraud.

For individuals, there are steps everyone can take (especially you, Gen Z and Millennials!) to avoid being taken in by misleading claims and misinformation online.

- An article from the BBC ([link removed] ) detailed a four-step method for spotting misinformation.
- A National Public Radio article ([link removed] ) outlined steps to take if you see someone you know falling for misinformation.

Have you seen election-related misinformation recently? Send us a screenshot!

Project Liberty events

// Project Liberty President Tomicah Tillemann and Sheila Warren, CEO of the Crypto Council for Innovation, are hosting a LinkedIn Live event ([link removed] ) this Thursday at 12pm ET about The People’s Bid and the future of TikTok. It’s open to the public. Register here ([link removed] ) .

Other notable headlines

// 🕶 Two college students used Meta’s smart glasses to dox people in real time, raising issues of privacy, according to an article in The Verge ([link removed] ) .

// 👤 Has social media fueled a teen-suicide crisis? Mental-health struggles have risen sharply among young Americans, according to an article in The New Yorker ([link removed] ) .

// 🚘 License plate readers are creating a US-wide, searchable database that reveals Americans’ political leanings and more, according to an article in WIRED ([link removed] ) .

// 🌧 Hurricane Helene’s response has been hampered by misinformation and conspiracy theories online, according to an article in The Washington Post ([link removed] ) .

// 🧠 An article in MIT Technology Review ([link removed] ) reported on a new California law that protects consumers' brain data, which could be used to infer their thoughts.

// 🏳️‍🌈 LGBTQ+ youth see content creators and AI chatbots as social lifelines. An article in Fast Company ([link removed] ) highlighted how social media and parasocial relationships offer a sense of connection.

Partner news & opportunities

// Trust Conference 2024: Voices from the frontlines of conflict

October 22-23, 2024 in London

The Trust Conference ([link removed] ) , hosted by Thomson Reuters ([link removed] ) , is where leading journalists will share their experiences reporting from conflict zones. This year’s conference will dive into the challenges journalists face in conflict reporting, strategies for safeguarding truth-tellers, and the role of responsible business in shaping global democracy. Apply ([link removed] ) to attend.

// ASML Fellowship Program now accepting applications

Deadline: November 1, 2024

The Applied Social Media Lab ([link removed] ) (ASML) announced the launch of its new Fellowship Program ([link removed] ), offering participants an opportunity to develop social media technologies that prioritize the public good. Fellows will collaborate with a network of tech experts, civil society advocates, and academics to advance their projects, with opportunities to present to and receive feedback from the Berkman Klein Center (BKC) community. Applications are due November 1; apply here ([link removed] ).

// FAI’s Samuel Hammond featured on the Statecraft Podcast

Samuel Hammond, Senior Economist at the Foundation for American Innovation ([link removed] ) (FAI), appeared on Statecraft, the podcast of the Institute for Progress ([link removed] ), to discuss the political economy of state capacity with host Santi Ruiz and Jennifer Pahlka, former US Deputy Chief Technology Officer. Listen here ([link removed] ).

Join The People's Bid

Have you checked out The People's Bid?

Learn more here ([link removed] ) and add your name ([link removed] ) as a supporter.

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

/ Project Liberty builds solutions that help people take back control of their lives in the digital age by reclaiming a voice, choice, and stake in a better internet.

Thank you for reading.

Facebook ([link removed] )

LinkedIn ([link removed] )


Instagram ([link removed] )


501 W 30th Street, Suite 40A,

New York, New York, 10001

Unsubscribe ([link removed] ) Manage Preferences ([link removed] )

© 2024 Project Liberty