From The International Fact-Checking Network <[email protected]>
Subject The UK’s fact-checkers are sending their AI to help Americans cover elections
Date November 6, 2025 2:01 PM
  Links have been removed from this email. Learn more in the FAQ.

By Enock Nyariki (mailto:[email protected]?subject=&body=)

Edited by Angie D. Holan (mailto:[email protected]?subject=&body=)

In this edition:
* Full Fact’s AI heads to U.S. newsrooms ahead of midterm elections
* IFCN launches SUSTAIN grants to help fact-checkers weather funding shifts


** The UK’s fact-checkers are sending their AI to help Americans cover elections
------------------------------------------------------------

From left: Full Fact’s head of AI Andy Dudfield, senior product manager Kate Wilkinson and senior data scientist David Corney. (Courtesy of Full Fact)

Inside a modest office near London Bridge, a small team of engineers and fact-checkers has spent a decade refining AI tools and models to do what most journalists can no longer manage: keep up. The system reads headlines, transcribes broadcasts and scans social media for claims worth verifying. It flags those most likely to mislead or cause harm.

The technology, developed by Full Fact, the United Kingdom’s leading fact-checking charity, is crossing the Atlantic. Ahead of the 2026 U.S. midterm elections, which many misinformation experts expect to be defined by AI-created falsehoods, Full Fact is offering its tools to American newsrooms to help track harmful misinformation at scale.

Founded in 2009, Full Fact began experimenting with technology long before large language models captured global attention. By 2016, its leaders had realized that human fact-checkers could no longer monitor every broadcast or platform for questionable claims on issues of public interest. When Andy Dudfield joined the organization in 2019 to lead its AI work, he found a small team already exploring ([link removed]) how automation could help.

Full Fact’s technology grew out of early natural language processing, before today’s generative models. The team began by training systems to identify checkable statements and match them with existing fact checks. In 2019, engineers fine-tuned Google’s BERT — a model built to understand language rather than generate it — to label claim types across news and social media.
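
To make that recipe concrete, here is a minimal sketch of what fine-tuning an encoder like BERT to label claim types can look like, using Hugging Face’s transformers and datasets libraries. The label set, the example sentences and the hyperparameters are illustrative assumptions, not Full Fact’s actual training setup.

    # Illustrative sketch, not Full Fact's code: fine-tune a BERT encoder
    # to label sentences by claim type. Labels and data here are invented.
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    LABELS = ["not_a_claim", "quantity", "prediction"]  # hypothetical claim types

    # Tiny stand-in for the tens of thousands of human annotations mentioned above.
    data = Dataset.from_dict({
        "text": ["Unemployment fell by 3% last year.",   # quantity claim
                 "What a lovely day it has been.",        # not a claim
                 "House prices will double by 2030."],    # prediction
        "label": [1, 0, 2],
    })

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=len(LABELS))

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=64)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="claim-classifier",
                               num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=data.map(tokenize, batched=True),
    )
    trainer.train()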

With tens of thousands of human annotations, the model learned to triage vast text streams, processing more than 300,000 sentences a day and flagging new claims that reappeared in different wording. In the past two years, larger generative models have joined the mix, estimating potential harm and catching paraphrased versions that old systems would miss. The new systems surface and group likely harmful misinformation, and fact-checkers decide what to investigate and publish.
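
The paraphrase-catching step can be illustrated with sentence embeddings: encode an incoming sentence and the library of already-checked claims as vectors, then compare them by cosine similarity so a reworded claim still matches. The sketch below assumes the sentence-transformers library; the model name, example claims and threshold are hypothetical, not details of Full Fact’s system.

    # Illustrative sketch, not Full Fact's system: match incoming sentences
    # against already-checked claims with sentence embeddings, so paraphrases
    # are caught even when the wording changes.
    from sentence_transformers import SentenceTransformer, util

    # Hypothetical library of claims that have already been fact-checked.
    checked_claims = [
        "Crime has doubled in London since 2010.",
        "The NHS hires 5,000 new nurses every month.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder
    claim_vectors = model.encode(checked_claims, convert_to_tensor=True)

    def match_existing_fact_checks(sentence, threshold=0.75):
        """Return previously checked claims this sentence likely repeats."""
        vector = model.encode(sentence, convert_to_tensor=True)
        scores = util.cos_sim(vector, claim_vectors)[0]
        return [(checked_claims[i], float(s))
                for i, s in enumerate(scores) if s >= threshold]

    # A paraphrase of the first claim should clear the threshold.
    print(match_existing_fact_checks("Since 2010, London's crime rate has doubled."))

The same comparison, run the other way, is what powers alerts when a debunked statement resurfaces in new wording.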

Users access the tools through a web dashboard that surfaces claims from news, social media, podcasts, video and radio. The interface links to original clips or posts and lets teams search by topic or speaker, see where claims have appeared, and receive alerts when debunked statements resurface. If enabled, the system can prioritize likely harmful claims so editors can decide what to work on first.

“Everything we built in technology is from the point of view of being built by fact-checkers for fact-checkers,” Dudfield told me. “We weren’t trying to work out what technology could do that would be cool. We were trying to find the actual problems we had with fact-checking and see if technology could help.”

That distinction shaped Full Fact AI, now an eight-person team at the center of the charity’s work. The system analyzes a claim’s potential harm by weighing how wrong it is, how believable it sounds and how likely it is to prompt people to act on it.

The harm model builds on research by Peter Cunliffe-Jones, founder of Africa Check and author of “Fake News: What’s the Harm?” In that book, published by the University of Westminster Press, Cunliffe-Jones traces how falsehoods can trigger measurable consequences — from mob attacks on health workers during the 2014 Ebola outbreak to lynchings in India fueled by WhatsApp hoaxes about child kidnappers. Between 2021 and 2025, he developed a model to predict which false claims were most likely to lead to real harm.

“Fact-checkers need to focus on what’s most likely to do real harm, not just what’s wrong,” Cunliffe-Jones told me in an email. Full Fact’s harm-scoring framework applies that model at scale, helping fact-checkers direct limited resources where they matter most.
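
Full Fact has not published its scoring formula, but the idea of folding wrongness, believability and likelihood of action into a single triage score can be sketched in a few lines. Everything below (the factor names, the multiplicative combination and the example numbers) is an assumption for illustration, not the charity’s model.

    # Illustrative sketch: factor names, combination rule and numbers are
    # assumptions. Three judgments about a claim are folded into one triage
    # score so the riskiest claims rise to the top of the queue.
    from dataclasses import dataclass

    @dataclass
    class ClaimAssessment:
        wrongness: float      # 0-1: how far the claim departs from the evidence
        believability: float  # 0-1: how plausible it sounds to an ordinary reader
        actionability: float  # 0-1: how likely it is to prompt real-world action

    def harm_score(a: ClaimAssessment) -> float:
        """Combine the factors into one 0-1 score. Multiplying (rather than
        averaging) encodes the intuition that a claim must be wrong AND
        believable AND actionable before it is truly dangerous."""
        return a.wrongness * a.believability * a.actionability

    queue = {
        "Drinking bleach cures the virus": ClaimAssessment(0.95, 0.30, 0.80),
        "The moon is 500,000 miles away": ClaimAssessment(0.90, 0.60, 0.05),
    }
    for claim in sorted(queue, key=lambda c: -harm_score(queue[c])):
        print(f"{harm_score(queue[claim]):.2f}  {claim}")

In practice the underlying judgments come from models and human reviewers; the point of the score is only the ordering it imposes on the queue.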

The tools have been used by more than 40 fact-checking organizations in over 30 countries, including during Nigeria’s 2023 presidential election, where fact-checkers employed them to track and debunk viral claims in real time ([link removed]). The system is “a co-intelligence between human and machine,” said Kate Wilkinson, Full Fact’s senior product manager, who is leading the outreach to U.S. newsrooms.

“Our tools were not designed to replace fact-checkers,” Wilkinson said. “They take off the plate the really time- and resource-intensive task of media monitoring.”

Full Fact has begun inviting U.S. fact-checking desks, from local teams to national networks, to test its AI tools ahead of next year’s elections. Wilkinson called the timing “appropriate,” and said the platform could help newsrooms handle election claims at scale without adding engineering staff. The rollout includes subsidized licenses and short onboarding sessions to help teams integrate the tools quickly into their daily work.

The move arrives as many nonprofit and independent newsrooms face mounting financial strain while experimenting with how AI can help them better serve their audiences. In the United States, Meta ended ([link removed]) its third-party fact-checking program in January, forcing several outlets to scale down and prompting nearly a third of organizations accredited by the International Fact-Checking Network to close.

Two weeks ago, Full Fact disclosed that Google had withdrawn its long-running support for the charity’s AI work. The organization said it received more than £1 million from Google last year, directly or through related funds; all of that funding has now been discontinued.

Dudfield called the loss significant but not insurmountable. “Technology is not something we do on the side at Full Fact,” he said. “It’s core to what we do.”

In a statement, Chris Morris, Full Fact’s chief executive, said the charity is “urgently refocusing” its fundraising while staying independent. Morris, the BBC’s first on-air fact-checker, also criticized what he described as a political chill from Silicon Valley. “We think Google’s decisions, and those of other big U.S. tech companies, are influenced by the perceived need to please the current U.S. administration,” he wrote ([link removed]). “Full Fact will always be clear: Verifiable facts matter, and the big internet companies have responsibilities when it comes to curtailing the spread of harmful misinformation.”

The funding cuts follow a wider retreat by major tech firms ([link removed]) from supporting fact-checking globally. Dudfield said the changing environment has reinforced his team’s mission to use technology to sustain independent verification. “We will carry on using technology to support fact-checking because we believe that is where we can make the biggest difference,” he said.

Full Fact’s expansion comes at a pivotal moment for fact-checking, said Lucas Graves, a journalism professor at the University of Wisconsin-Madison and author of “Deciding What’s True: The Rise of Political Fact-Checking in American Journalism.”

“There’s lots of experimentation with AI happening across the media landscape in the U.S., in newsrooms large and small,” he said. “But what Full Fact has is a purpose-built platform developed over nearly a decade for tracking and prioritizing misinformation — an area where most newsrooms don’t have a tremendous amount of experience.”

Graves called the expansion “well-timed,” adding that the threat from online misinformation is evolving fast while newsroom resources continue to shrink.

Despite shifting politics and declining support for the industry, Dudfield said the goal hasn’t changed. “We’re fact-checkers. We check facts,” he said. The work of exposing falsehoods, he added, remains essential even as pressures on independent journalists grow. The alternative, he said, would cost far more.

Support the International Fact-Checking Network ([link removed])


** IFCN launches SUSTAIN grants to help fact-checkers weather funding shifts
------------------------------------------------------------

The International Fact-Checking Network launched SUSTAIN ([link removed]) today, a new round of the Global Fact Check Fund designed to help fact-checking organizations maintain capacity and build long-term stability.

The fund awards $30,000 grants to IFCN signatories for staffing, training, fundraising, revenue diversification, or tools that strengthen financial resilience. Unlike past project-based rounds, SUSTAIN provides bridge funding to keep operations steady during transitions in platform partnerships and funding models.

Up to 25 grants will be awarded in this first round, with applications open from Nov. 6 to Dec. 4, 2025. The IFCN plans to distribute $3 million through the Global Fact Check Fund over the next two years.

ON OUR RADAR
* The Global Investigative Journalism Network highlights ([link removed]) the Belarusian Investigative Center, which won a Free Media Award from Norway’s Fritt Ord Foundation for its reporting from exile. The outlet, whose fact-checking unit is an IFCN member, has uncovered billion-dollar corruption schemes linked to the Lukashenka regime and helped trigger about 90 EU and U.S. sanctions.
* The IFCN will be attending the Global Investigative Journalism Network conference ([link removed]) in Malaysia from Nov. 20 to 24 to support fact-checking in Asia and to seek backing for our international community of fact-checkers. If you’re going to be at the conference, please reach out to IFCN director Angie Drobnic Holan (mailto:[email protected]) about attending an IFCN side event ([link removed]).
* Lupa founder Cris Tardaguila recently discussed the rise and conviction of former Brazil President Jair Bolsonaro, the impact of misinformation, and the growing influence of China and Russia in Brazil with Jacob Law on his podcast The Rip Current ([link removed]).
* Nieman Lab reports ([link removed]) that Russia is expanding its influence by training journalists and setting up imitation fact-checking networks to spread state propaganda. At the Disinfo2025 conference in Ljubljana, organized by EU Disinfo Lab, experts detailed how RT’s training schemes and other initiatives are reaching Africa and Serbia under the guise of media development.
* New research ([link removed]) from the European Broadcasting Union (EBU) and BBC finds that leading AI assistants misrepresent news content in nearly half their responses. The study analyzed 3,000 answers across 14 languages from ChatGPT, Copilot, Gemini, and Perplexity. Forty-five percent contained at least one major issue, and about a third had sourcing problems, with Gemini showing the highest rate. The EBU warned that as more people turn to AI for news, “when people don’t know what to trust, they end up trusting nothing at all.”
* More than 110 journalism and civil society groups – including the International Fact-Checking Network and the European Fact-Checking Standards Network – have urged the European Union to double its investment in independent media as it drafts its next seven-year budget. In a joint statement ([link removed]) coordinated by major press freedom groups, they argue that journalism and public-interest information are “democratic infrastructure” and vital to Europe’s resilience against disinformation and authoritarian influence. The signatories are calling for long-term, flexible funding through new EU programs that treat media support as a core element of democracy, not a subsidy.
* Delfi’s fact-checking unit, Melo Detektorius, is pushing back ([link removed]) against Kremlin-linked and AI-driven falsehoods from its newsroom in Vilnius. The IFCN and EFCSN member has verified more than 3,000 claims in Lithuanian and Russian, trained citizens to spot manipulation before it spreads, and built public trust through prebunking and media literacy work. Despite harassment from hostile actors, the team continues its verification mission as Lithuania prepares to host GlobalFact 2026.

Have ideas or suggestions for the next issue of Factually? Email us at [email protected] (mailto:[email protected]?subject=&body=).


© All rights reserved Poynter Institute 2025
801 Third Street South, St. Petersburg, FL 33701

Was this email forwarded to you? Subscribe to our newsletters ([link removed]).

If you don't want to receive email updates from Poynter, we understand.
You can change your subscription preferences ([link removed]) or unsubscribe from all Poynter emails ([link removed]) .