Within two hours, videos of the Utah shooting had been viewed more than 11 million times on social media.
View in browser ([link removed] )
September 16th, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here. ([link removed] )
Why violent videos are at the top of your feed
This newsletter touches on violence. Please take care when reading.
Within minutes of the shooting of Charlie Kirk in Utah, graphic videos of the killing were uploaded to X, YouTube, Instagram, and other social platforms.
Within two hours, the videos had been watched more than 11 million times ([link removed] ) .
In many cases, children had already seen the videos shared via social platforms or group chats before their parents had a chance to talk with them.
Graphic videos on the internet are not new, but what is new is how fast such videos can travel—amplified by algorithms, shared in private chats, and tweaked and replicated to circumvent content moderation policies and safeguards.
The public assassination of Kirk—broadcast around the world and viewed by millions, including children—underscores the broader challenge of governing a digital ecosystem where moderation, algorithms, and virality collide.
In this newsletter, we examine why and how violent videos rise to the top of our feeds, the decisions that led us here, and what we can do about it.
// When horror goes viral
After the shooting in Utah last Wednesday, mainstream media outlets shared edited videos that omitted the shooting itself, but users quickly took to social media to publish uncensored footage. By the next morning, many videos had been taken down, but searches for misspellings like “Charlee” and “Charles Kirik” on TikTok still returned close-up videos of the shooting.
The cat-and-mouse game between posters tagging content with intentional misspellings and platforms trying to remove or flag as much of it as possible reveals just how challenging it is to create safe online spaces at scale.
“This one moved at a pace you can’t keep up with,” said Jill Murphy ([link removed] ) , the chief content officer at the family advocacy group Common Sense Media.
It wasn’t the first time a graphic incident had been captured and shared for all to see—and not even the first time this month. Just a few weeks earlier, a video of the fatal stabbing of a Ukrainian refugee on a train in Charlotte, North Carolina, sparked outrage and spread rapidly across YouTube and other social platforms.
In 2015, a disgruntled former reporter ([link removed] ) for a local news network in Roanoke, Virginia, fatally shot a reporter and a cameraman during a live broadcast. In 2019, a man fatally shot 51 people ([link removed] ) in two mosques in Christchurch, New Zealand, live-streaming his massacre on Facebook Live. Facebook reported ([link removed] ) removing 1.5 million videos of the massacre in the first 24 hours—yet many versions still circulated widely.
These viral surges aren’t just technical; they shape public memory. Once violent images circulate widely, they embed in culture and politics long after platforms attempt to remove them.
// How platforms manage harmful content
After Kirk’s murder, Bluesky, Meta, Reddit, YouTube, and Discord issued statements ([link removed] ) reaffirming their commitments to remove content that glorifies or promotes violence and to enforce their existing content moderation policies. Roblox ([link removed] ) announced that it removed user-generated game experiences related to the Kirk shooting.
YouTube and Meta said they would remove some of the most graphic content and restrict other videos to viewers 18 and older. And yet, Katie Paul, director of the Tech Transparency Project ([link removed] ) , a nonprofit watchdog organization, could easily find videos of the Kirk shooting on Instagram’s Teen Accounts, which are designed to be safer for users 13 to 17 years old.
Content moderation teams have been scaled back in recent years ([link removed] ) . Platforms like X, TikTok, Facebook, and Instagram have reduced or eliminated the work of human moderators. Earlier this year, Meta shifted its strategy ([link removed] ) from paid fact-checkers to user-policed community notes. Many platforms now rely on AI tools to identify, label, suppress, or remove content that violates their content moderation policies. But doing so is harder with videos than with still images or text: it takes time to train AI tools on footage they have never encountered and to pattern-match copies that users have tweaked to evade detection.
Katie Harbath, a former Facebook executive, told The Washington Post ([link removed] ) , “It’s not an instantaneous thing where you can hit a button and have it disappear from the internet completely.” Yet she believes platforms have gotten better since the New Zealand live-streaming massacre.
Martin Degeling, a researcher who audits algorithmic systems, agrees. “I don't think it is possible to prevent the initial distribution, but I think platforms can do better in preventing the massive distribution through algorithmic feeds, especially to people that did not specifically search for it,” he said ([link removed] ) .
Preventing distribution through algorithmic feeds would require reversing the exact behaviors these platforms encourage: endless scrolling, autoplay video, and the amplification ([link removed] ) of sensationalized content. It would require changing how the algorithms work, how content moderation protocols flag content, and how platforms determine what content is age-appropriate.
Utah Governor Spencer Cox weighed in on Meet the Press on Sunday ([link removed] ) . “I believe that social media has played a direct role in every single assassination and assassination attempt that we have seen over the last five, six years,” Cox said. “There is no question in my mind — ‘cancer’ probably isn’t a strong enough word. What we have done, especially to our kids, it took us decades to realize how evil these algorithms are.”
// Solutions
On the day of the shooting, traffic to Common Sense Media’s web page titled “Explaining the News to Our Kids ([link removed] ) ” was 20 times higher ([link removed] ) than the day before.
In the absence of more effective content moderation practices and technologies, and when breaking news travels faster than even the most sophisticated detection software and moderators, the burden invariably shifts to individuals. It’s up to parents to have difficult conversations with their children about the types of content they might encounter online ([link removed] ) . It’s up to social media users to disable the autoplay of graphic images ([link removed] ) , filter out graphic content, and limit sensitive topics by adjusting their content preferences ([link removed] ) in their settings.
Further upstream, real opportunities to improve Trust & Safety ([link removed] ) lie not just in holding Big Tech accountable, but in shaping the small and medium platforms that are still emerging. Eli Sugarman, tech governance expert and director of special projects at the Hewlett Foundation, agrees. Last year, Project Liberty interviewed Sugarman ([link removed] ) , who offered five insights about the future of Trust & Safety and how to stay one step ahead of the next viral harm.
He argued that the approach to Trust & Safety needs to shift from detecting and moderating content to identifying the actors and behaviors behind it. Instead of fixating on the content itself, teams can track actors and behaviors across platforms to anticipate where malicious content may appear next. Otherwise, content moderation is an endless game of whack-a-mole.
Another tool is ROOST ([link removed] ) , a multi-organization initiative that develops, maintains, and distributes open source building blocks and tools that safeguard users and communities. Project Liberty Institute is a founding member of ROOST, which is building tools ([link removed] ) to identify, remove, and report harmful content, while also helping content moderators do their jobs more effectively.
// The impact of violent content
The way people experience events like Kirk’s murder can have lasting effects.
“This is the first time such a widely recognized figure has been murdered in such a public way and spread this way on social media,” said Emerson Brooking ([link removed] ) , the director of strategy at the Digital Forensic Research Lab of the Atlantic Council. “Because of that, I think unfortunately this is a viral moment with tremendous staying power. It will have lasting consequence for American political and civic life.”
How we respond—through policy, technology, and culture—will determine whether these tragedies become the new normal or serve as the catalyst for building a safer digital public square.
// Project Liberty Updates
// Next week is United Nations General Assembly week in New York. Project Liberty was highlighted in an article in Tech Policy Press ([link removed] ) for its role in creating resilient digital infrastructure.
📰 Other notable headlines
// 📘 Encyclopedia Britannica and Merriam-Webster sue Perplexity for copying their definitions, according to an article in The Verge ([link removed] ) . (Paywall).
// 🎮 He made a friend on Roblox. Then their relationship turned sinister, according to an article in The New York Times ([link removed] ) . (Paywall).
// 🇨🇳 A series of corporate leaks show that Chinese technology companies function far more like their Western peers than one might imagine, according to an article in WIRED ([link removed] ) . (Paywall).
// 🇦🇱 An article in Politico ([link removed] ) reported that Albania has become the first country in the world to have an AI minister — not a minister for AI, but a virtual minister made of pixels and code and powered by artificial intelligence. (Free).
// 🤔 An article in The Atlantic ([link removed] ) posed a question all colleges should ask themselves about AI: how far are they willing to go to limit its harms? (Paywall).
// 💵 An article in Ars Technica ([link removed] ) reported that Internet Archive’s big battle with music publishers ends in a settlement. (Free).
Partner news
// Join Metagov’s deliberative tools working groups
Metagov's ([link removed] ) Interoperable Deliberative Tools initiative is advancing open, modular governance through six working groups on data standards, facilitation, ethics, and more. Each group is open to new contributors—learn more here ([link removed] ) .
// BKC Summer Law Program - AI and the Law
The Berkman Klein Center and Harvard Law School Executive Education launched AI and the Law: Navigating the New Legal Landscape ([link removed] ) , a new program exploring AI’s impact on law, policy, and practice. Over two days, participants charted the challenges of a rapidly shifting legal frontier. Sign up ([link removed] ) to learn more about the next iteration of the program.
// Global dialogues reveal shifting trust in AI
The Collective Intelligence Project's ([link removed] ) latest installment of Global Dialogues ([link removed] ) found that 58% of participants trust their AI chatbot to act in their best interests—more than double the trust expressed in elected officials (28%). Nearly 40% believe AI could make better decisions than governments, even as skepticism toward tech companies remains high. Read the full report here ([link removed] ) .
What did you think of today's newsletter?
We'd love to hear your feedback and ideas. Reply to this email.
// Project Liberty builds solutions that help people take back control of their lives in the digital age by reclaiming a voice, choice, and stake in a better internet.
Thank you for reading.
Facebook ([link removed] )
LinkedIn ([link removed] )
Instagram ([link removed] )
10 Hudson Yards, Fl 37,
New York, New York, 10001
Unsubscribe ([link removed] ) Manage Preferences ([link removed] )
© 2025 Project Liberty LLC