US election: social media algorithms are boosting hate and lies. Here's what we found.
Friend,
Millions of Americans will head to the polls on Tuesday to choose their next president.
This year’s campaign was marked by the dissemination of harmful deepfakes and the spread of hate, conspiracies, and violent rhetoric – all boosted by social media’s algorithms.
CCDH kept a close eye on the platforms enabling the false narratives and the hateful attacks that can mislead voters and threaten the integrity of the US election.
Democracy is at stake.
Here’s a recap of our research on the US election:
Elon Musk's X: a misinformation theme park
X’s Community Notes are failing to counter false and misleading claims about the US election, despite Elon Musk calling it the “best source of truth on the internet.” We analyzed a sample of accurate Community Notes and found that 74% aren’t being shown to users.
In August, we showed that Elon Musk posted false and misleading election claims at least 50 times this year, racking up nearly 1.2 billion views – none of which received a Community Note.
Following the July assassination attempt on Donald Trump, we found that just 100 top posts promoting conspiracy theories about the event were seen 215 million times on X. Find out more in CNN.
Instagram: abuse against women candidates
We reported to Instagram 1,000 abusive comments targeting women candidates from both parties running for office in 2024. A week later, the platform had taken no action against 93% of these comments, including sexist and racist abuse, death threats, and rape threats. Abuse and violence have no place in our politics.
AI deepfakes: did they really say that?
We tested 6 popular AI voice cloning tools and found they could generate convincing election disinformation in 80% of our tests - including in the voices of Kamala Harris and Donald Trump.
Convincing AI-generated images of political endorsements made by fake Americans are getting millions of interactions on Facebook ahead of the US election. These unlabeled posts include deepfakes of military veterans, police officers, and protestors.
We tested X’s AI tool Grok and found that it can easily generate misleading images about the US election, including disinformation about candidates and election fraud.
Our report found that Midjourney AI still generates misleading images of US politicians – it did so in 40% of our tests – despite the platform’s policy against election disinformation. This report is a follow-up to our Fake Image Factories research, published in March, when we tested Midjourney, ChatGPT Plus, DreamStudio, and Microsoft’s Image Creator with text prompts about the 2024 US presidential election.
Social media and AI companies are playing a leading and shameful role in destroying one of the pillars of our civilization: our democracy. They must stop turning a blind eye to hate, disinformation, and conspiracies before it’s too late.
But there is hope.
We can hold social media giants accountable and responsible when they let harm flood our digital spaces. CCDH’s STAR Framework shows the pathway to this future, where human rights, democracy and our communities are protected. Learn more about STAR.
Together, we can protect our democracy.
Best wishes,
The CCDH Team
General: [email protected] | Press: [email protected]
You are receiving this email because you subscribed to CCDH's email list. Manage personal data, email subscriptions, and recurring donations here, or unsubscribe from all.