As Stanford’s Trust and Safety Research conference opens, YouTube caves to House Republican attacks
By Angie Drobnic Holan
Stanford’s Trust and Safety Research Conference is known as a convening spot for people in academia, law, industry and civil society who work intensively on online safety. This year, the conference took place in a daunting environment: The Trump administration has been cutting funds for academic research and railing against content moderation. Those attacks weren’t mentioned much on the main stage, but the agenda did show where much of the energy for trust and safety is, and where it isn’t.
The hopeful themes of the conference centered on child safety and using AI tools to improve the user experience. Julie Inman Grant, Australia's eSafety Commissioner, keynoted on child safety concerns, noting Australia’s new law that will stop children under age 16 from creating or keeping social media accounts. It goes into effect in December, and Inman Grant talked about Australia’s work to prepare the public and families.
On AI, Dave Willner spoke about the potential for AI to improve the experience of users whose social media content gets moderated. (Willner formerly worked at both Facebook and OpenAI.) His talk was of particular interest to fact-checkers in Meta’s Third Party Fact-Checking Program who have answered questions from unhappy users. Willner noted two problems that users experience: their content is blocked by algorithms (not fact-checkers) for supposedly violating community rules when it didn’t, and they get no response to their questions about what happened.
Willner hypothesized that scaling AI could improve the accuracy of algorithmic content vetting, so that fewer posts would be mischaracterized as breaking rules. He also proposed that AI could be used for customer service, communicating to users why their content had been blocked and how it could be unblocked. AI could provide that scale when tech platforms don’t want to invest in human responses.
Less of the conference’s energy went to discussions of countering disinformation with systemic approaches. To add insult to injury, news broke right before the conference that YouTube was capitulating to an investigation by the U.S. House Judiciary Committee, led by Congressman Jim Jordan. YouTube agreed to reinstate accounts that had violated its guidelines during the COVID-19 pandemic and after the Jan. 6 riots. (Conspiracy theorist Alex Jones immediately sought reinstatement, but YouTube said the reinstatement process wasn’t open yet.) And in a jab at fact-checkers, YouTube’s attorneys said in a letter that the platform “has not and will not empower fact-checkers” to label content.
Alexios Mantzarlis of Cornell Tech and Indicator (and former IFCN director) spoke at the conference and headlined Indicator’s weekly briefing with the subject line, “YouTube bends the knee.”
“On Tuesday, YouTube did what it often does: it took the same approach as Meta on a controversial topic, but with a delay and less of a splash,” he wrote. “In retrospect, I might have to hand it to Mark Zuckerberg for capitulating on camera rather than getting a lawyer to do it for him.”
At the conference, I did speak with team members from Google, TikTok and even X. But mostly I was left to wonder whether fact-checking's future lies in new institutional alliances rather than platform partnerships.
Much of the conference was devoted to academic studies in the trust and safety field, where researchers are doing strong work to document the times we’re living through, often without the platform cooperation they once had.
I co-facilitated a workshop with Leticia Bode and Naomi Shiffman on “Let's Share,” a framework initiative working to democratize access to publicly available platform data. The goal is to empower more researchers to study how information spreads online, work that will be even more critical as platforms retreat from their own transparency commitments.
The initiative is coordinated by the Washington-based Knight-Georgetown Policy Institute; more details will be released in November.
In this difficult environment, the research work at Stanford offered some reassurance: serious people are still documenting problems and searching for solutions. But we’ll all need new strategies if platforms continue to abandon the field.