
In this edition

  • Stanford hosts trust and safety summit as YouTube yields to partisan pressure

  • India’s fact-checking alliance widens scope to meet new challenges

  • AFP wins Africa’s top fact-checking award



As Stanford’s Trust and Safety Research Conference opens, YouTube caves to House Republican attacks


By Angie Drobnic Holan


Stanford’s Trust and Safety Research Conference is known as a convening spot for people in academia, law, industry and civil society who work on online safety. This year, the conference took place in a daunting environment: The Trump administration has been cutting funds for academic research and railing against content moderation. Those attacks weren’t mentioned much on the main stage, but the agenda did show where much of the energy in trust and safety is, and where it isn’t.


The conference’s hopeful themes centered on child safety and on using AI tools to improve the user experience. Julie Inman Grant, Australia’s eSafety Commissioner, delivered a keynote on child safety, noting Australia’s new law that will stop children under 16 from creating or keeping social media accounts. The law takes effect in December, and Inman Grant described Australia’s work preparing the public and families.


On AI, Dave Willner spoke about its potential to improve users’ experience of having their social media content moderated. (Willner formerly worked at both Facebook and OpenAI.) His talk was of particular interest to fact-checkers in Meta’s Third-Party Fact-Checking Program who have fielded questions from unhappy users. Willner noted two problems users run into: their content gets blocked by algorithms (not fact-checkers) for violating community rules when it didn’t, and their questions about what happened go unanswered.


Willner hypothesized that AI’s scale could improve the accuracy of algorithmic content vetting, so that fewer posts would be mischaracterized as breaking the rules. He also proposed using AI for customer service: communicating to users why their content was blocked and how it could be unblocked. AI could provide scale where tech platforms don’t want to invest in human responses.
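To make Willner’s idea concrete, here is a minimal sketch of what an AI-drafted moderation explanation might look like. This is purely illustrative: the model name, prompts and rule text are placeholders I am assuming, not anything Willner or any platform described building.

```python
# Illustrative sketch of AI-assisted moderation "customer service":
# given a blocked post and the rule it tripped, draft a plain-language
# explanation and appeal instructions for the user.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_moderation_decision(post_text: str, rule_triggered: str) -> str:
    """Draft an explanation of an automated moderation action."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You explain content moderation decisions to users. "
                    "Name the rule that was triggered, quote the relevant "
                    "part of the post, and explain how to appeal."
                ),
            },
            {
                "role": "user",
                "content": f"Post: {post_text}\nRule triggered: {rule_triggered}",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(explain_moderation_decision(
        "Miracle cure: this supplement reverses diabetes in a week!",
        "Medical misinformation: promoting unproven treatments",
    ))
```

A real deployment would also need the harder half of Willner’s proposal, an accurate classifier in front of this step, since a fluent explanation of a wrong decision would only deepen user frustration.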


Less of the conference’s energy went to systemic approaches to countering disinformation. To add insult to injury, news broke right before the conference that YouTube was capitulating to an investigation by the U.S. House Judiciary Committee, led by Rep. Jim Jordan. YouTube agreed to reinstate accounts that had been banned for violating its guidelines during the COVID-19 pandemic and after the Jan. 6 riot. (Conspiracy theorist Alex Jones immediately sought reinstatement, but YouTube said the process wasn’t open yet.) And in a jab at fact-checkers, YouTube’s attorneys wrote in a letter that the platform “has not and will not empower fact-checkers” to label content.


Alexios Mantzarlis of Cornell Tech and Indicator (and former IFCN director) spoke at the conference and headlined Indicator’s weekly briefing with the subject line, “YouTube bends the knee.”


“On Tuesday, YouTube did what it often does: it took the same approach as Meta on a controversial topic, but with a delay and less of a splash,” he wrote. “In retrospect, I might have to hand it to Mark Zuckerberg for capitulating on camera rather than getting a lawyer to do it for him.”


At the conference, I did speak with members of trust and safety teams at Google, TikTok and even X. But mostly I was left to wonder whether fact-checking’s future lies in new institutional alliances rather than platform partnerships.


Much of the conference was devoted to academic studies of trust and safety, where researchers are working hard to document the times we’re living through, often without the platform cooperation they once had.


I co-facilitated a workshop with Leticia Bode and Naomi Shiffman on “Let’s Share,” an initiative developing a framework to democratize access to publicly available platform data. The goal is to empower more researchers to study how information spreads online, work that will become even more critical as platforms retreat from their own transparency commitments.


The initiative is coordinated by the Washington-based Knight-Georgetown Institute; more details will be released in November.



Even in this environment, the research work at Stanford offered some reassurance: serious people are still documenting problems and searching for solutions. But we’ll all need new strategies if platforms continue to abandon the field.






India’s fact-checking alliance widens mission to tackle scams and AI fakes


By Enock Nyariki

India’s Misinformation Combat Alliance has rebranded as the Trusted Information Alliance (TIA), expanding its mission beyond fact-checking to cover digital safety, responsible AI use, and online fraud prevention.

The shift comes amid a surge in scams that cost Indians an estimated ₹7,000 crore (about $770 million) in the first five months of 2025, according to the Ministry of Home Affairs’ cybercrime unit, which projects total losses could top ₹1.2 lakh crore (about $13.2 billion) by year’s end.
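For readers unfamiliar with Indian numbering units, the arithmetic behind those parenthetical conversions is below: 1 crore is 10^7 rupees and 1 lakh crore is 10^12 rupees, and the roughly ₹91-per-dollar rate is inferred from the story’s own figures rather than stated in it.

$$
\begin{aligned}
\text{₹}7{,}000\text{ crore} &= 7 \times 10^{10}\ \text{rupees} \approx \tfrac{7 \times 10^{10}}{91} \approx \$770\text{ million},\\
\text{₹}1.2\text{ lakh crore} &= 1.2 \times 10^{12}\ \text{rupees} \approx \tfrac{1.2 \times 10^{12}}{91} \approx \$13.2\text{ billion}.
\end{aligned}
$$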

“The information ecosystem has grown more complex,” wrote Rakesh Dubbudu, founder of the fact-checking site Factly, noting that misinformation now includes AI-generated deepfakes, voice clones, and synthetic narratives.

The new alliance brings together media outlets, civil society groups, and technology partners to promote transparency and media literacy. Fact-checking remains central, but TIA’s broader focus includes research, policy engagement, and consumer protection, especially for senior citizens and first-time internet users targeted by fraud.

TIA will host its first Trusted Information Alliance Conference (TIACoN 2025) on Nov. 6, bringing together journalists, researchers, and tech leaders to explore misinformation and trust in the AI age.

Have ideas or suggestions for the next issue of Factually? Email us at [email protected].