From The International Fact-Checking Network <[email protected]>
Subject As Stanford’s Trust and Safety Research conference opens, YouTube caves to House Republican attacks
Date October 9, 2025 1:01 PM
  Links have been removed from this email. Learn more in the FAQ.

In this edition
* Stanford hosts trust and safety summit as YouTube yields to partisan pressure
* India’s fact-checking alliance widens scope to meet new challenges
* AFP wins Africa’s top fact-checking award


** As Stanford’s Trust and Safety Research conference opens, YouTube caves to House Republican attacks
------------------------------------------------------------

By Angie Drobnic Holan (mailto:[email protected])

Stanford’s Trust and Safety Research Conference is known as a convening spot for people in academia, law, industry and civil society who work intensively for online safety. This year, the conference took place in a daunting environment: The Trump administration has been cutting funds for academic research and railing against content moderation. Those attacks weren’t mentioned much on the main stage of the conference, but the agenda did show where much of the energy for trust and safety is — and where it isn’t.

The hopeful themes of the conference centered on child safety and using AI tools to improve the user experience. Julie Inman Grant, Australia's eSafety Commissioner, keynoted on child safety concerns, noting Australia’s new law that will stop children under age 16 from creating or keeping social media accounts. It goes into effect in December, and Inman Grant talked about Australia’s work to prepare the public and families.

On AI, Dave Willner spoke about the potential for AI to improve the experience users have when their social media content is moderated. (Willner formerly worked at both Facebook and OpenAI.) His talk was of particular interest to fact-checkers in Meta’s Third Party Fact-Checking Program who have fielded questions from unhappy users. Willner noted two problems users run into: content blocked by algorithms (not fact-checkers) for violating community rules when it didn’t, and the lack of any response to their questions about what happened.

Willner hypothesized that AI’s scaling could improve the accuracy of algorithmic content vetting, so that fewer posts would be mischaracterized as breaking rules. He also proposed that AI could provide customer service, telling users why their content had been blocked and how it could be unblocked, offering scale where tech platforms don’t want to invest in human responses.

Less of the conference’s energy went to discussions of countering disinformation with systemic approaches. To add insult to injury, news broke right before the conference that YouTube was capitulating to an investigation by the U.S. House Judiciary Committee, led by Congressman Jim Jordan. YouTube agreed ([link removed]) to reinstate accounts that had violated its guidelines during COVID-19 and after the Jan. 6 riot. (Conspiracy theorist Alex Jones was among those who attempted ([link removed]) immediate reinstatement, but YouTube said the process wasn’t open yet.) And in a jab at fact-checkers, YouTube’s attorneys said in a letter that the platform “has not and will not empower fact-checkers” to label content.

Alexios Mantzarlis of Cornell Tech and Indicator (and former IFCN director) spoke at the conference and headlined Indicator’s weekly briefing ([link removed]) with the subject line, “YouTube bends the knee.”

“On Tuesday, YouTube did what it often does: it took the same approach as Meta on a controversial topic, but with a delay and less of a splash,” he wrote. “In retrospect, I might have to hand it to Mark Zuckerberg for capitulating on camera rather than getting a lawyer to do it for him.”

At the conference, I did speak with team members from Google, TikTok and even X. But primarily I was left to wonder whether fact-checking's future lies in new institutional alliances rather than platform partnerships.

Much of the conference was devoted to academic studies of trust and safety, where researchers are doing strong work documenting the times we’re living through, often without the platform cooperation they once had.

I co-facilitated a workshop with Leticia Bode and Naomi Shiffman on “Let's Share,” a framework initiative working to democratize access to publicly available platform data. The goal is to empower more researchers to study how information spreads online, work that will be even more critical as platforms retreat from their own transparency commitments.

The initiative ([link removed]) is coordinated by the Washington-based Knight-Georgetown Institute; more details will be released in November.

In this daunting environment, the research work at Stanford offered some reassurance: serious people are still documenting problems and searching for solutions. But we’ll all need new strategies if platforms continue to abandon the field.

Support the International Fact-Checking Network ([link removed])


** India’s fact-checking alliance widens mission to tackle scams and AI fakes
------------------------------------------------------------

By Enock Nyariki (mailto:[email protected])

India’s Misinformation Combat Alliance has rebranded as the Trusted Information Alliance ([link removed]) (TIA), expanding its mission beyond fact-checking to cover digital safety, responsible AI use, and online fraud prevention.

The shift comes amid a surge in scams that cost Indians an estimated ₹7,000 crore (about $770 million) in the first five months of 2025, according to the Ministry of Home Affairs’ cybercrime unit, which projects total losses could top ₹1.2 lakh crore (about $13.2 billion) by year’s end.

“The information ecosystem has grown more complex,” wrote Rakesh Dubbudu, founder of alliance member Factly, noting that misinformation now includes AI-generated deepfakes, voice clones, and synthetic narratives.

The new alliance brings together media outlets, civil society groups, and technology partners to promote transparency and media literacy. Fact-checking remains central, but TIA’s broader focus includes research, policy engagement, and consumer protection, especially for senior citizens and first-time internet users targeted by fraud.

TIA will host its first Trusted Information Alliance Conference (TIACoN 2025) on Nov. 6, bringing together journalists, researchers, and tech leaders to explore misinformation and trust in the AI age.

ON OUR RADAR
* AFP’s Samad Uthman won ([link removed]) Fact-Check of the Year in the professional category at the African Fact-Checking Awards in Dakar (Oct. 1-2) for exposing an AI impersonation of a Nigerian scientist used to sell a fake heart-disease cure. AFP also won the top prize in 2023 and took silver in 2022 and 2024.

* MediaWise’s Alex Mahadevan shows ([link removed]) that OpenAI’s Sora 2 can generate hyperrealistic misinformation fast, creating a fake nuclear attack alert, a staged political freakout, and other clips in under five minutes. Within three days of the tool’s release, The New York Times found ([link removed]) users producing lifelike videos of ballot fraud, crimes, and street explosions despite guardrails.

* Nina Jankowicz, who briefly led the Biden administration’s Disinformation Governance Board, says ([link removed]) that House Judiciary Chair Jim Jordan’s own investigation undercuts his claims that Google and YouTube “censored conservatives.” Transcripts from 15 executives show they weren’t coerced and enforced moderation policies on their own. Techdirt’s Mike Masnick writes ([link removed]) that Jordan is misrepresenting a new Alphabet letter, which says Biden officials “pressed” YouTube about some COVID-19 posts but that the company kept its policies independent.

* The Poynter Institute, home to the IFCN, has launched an AI Innovation Lab ([link removed]) to expand its work on AI literacy, ethics and disinformation. Led by MediaWise’s Alex Mahadevan, the lab will explore how AI shapes public trust and journalism, train journalists to cover the industry and detect deepfakes, and develop tools to counter falsehoods. I’ll spend part of my time contributing to its disinformation research.

* A new study published in Nature ([link removed]) finds that a single exposure to false information rarely changes behavior. In three preregistered experiments by teams at University College Dublin and University College Cork, participants saw fabricated stories about food contamination or climate change. One-off exposure didn’t affect what they ate or their views on climate, though skepticism-driven misinformation led to a small drop in petition signatures.

Have ideas or suggestions for the next issue of Factually? Email us at [email protected] (mailto:[email protected]).


© All rights reserved Poynter Institute 2025
801 Third Street South, St. Petersburg, FL 33701

Was this email forwarded to you? Subscribe to our newsletters ([link removed]) .

If you don't want to receive email updates from Poynter, we understand.
You can change your subscription preferences ([link removed]) or unsubscribe from all Poynter emails ([link removed]) .