Inside a modest office near London Bridge, a small team of engineers and fact-checkers has spent a decade refining AI tools and models to do what most journalists can no longer manage: keep up. The system reads headlines, transcribes broadcasts and scans social media for claims worth verifying. It flags those most likely to mislead or cause harm.
The technology, developed by Full Fact, the United Kingdom’s leading fact-checking charity, is crossing the Atlantic. Ahead of the 2026 U.S. midterm elections, which many misinformation experts expect to be defined by AI-created falsehoods, Full Fact is offering its tools to American newsrooms to help track harmful misinformation at scale.
Founded in 2009, Full Fact began experimenting with technology long before large language models captured global attention. By 2016, its leaders had realized that human fact-checkers could no longer monitor every broadcast or platform for questionable claims on issues of public interest. When Andy Dudfield joined the organization in 2019 to lead its AI work, he found a small team already exploring how automation could help.
Full Fact’s technology grew out of early natural language processing, before today’s generative models. The team began by training systems to identify checkable statements and match them with existing fact checks. In 2019, engineers fine-tuned Google’s BERT — a model built to understand language rather than generate it — to label claim types across news and social media.
With tens of thousands of human annotations, the model learned to triage vast text streams, processing more than 300,000 sentences a day and flagging new claims that reappeared in different wording. In the past two years, larger generative models have joined the mix, estimating potential harm and catching paraphrased versions that older systems would miss. The new systems surface and group likely harmful misinformation, and fact-checkers decide what to investigate and publish.
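The pipeline described above has two steps: classify sentences as checkable claims, then match flagged claims against fact checks that already exist. A toy sketch of that shape is below. It is an illustration only, not Full Fact's code: the keyword heuristic stands in for the fine-tuned BERT classifier, the string-similarity match stands in for the embedding-based paraphrase matching, and all names, example claims and thresholds are invented.

```python
import difflib

# Invented example database mapping previously checked claims to verdicts.
KNOWN_FACT_CHECKS = {
    "crime has fallen by 10 percent since 2020": "Verdict: not supported by official figures",
}

def looks_checkable(sentence: str) -> bool:
    # Toy stand-in for the trained claim classifier: flag sentences that
    # contain numbers or comparative language as potentially checkable.
    markers = ("percent", "%", "more than", "fewer", "doubled", "rose", "fell")
    return any(m in sentence.lower() for m in markers)

def match_existing(claim: str, threshold: float = 0.6):
    # Toy stand-in for semantic matching: find the closest previously
    # checked claim by surface similarity. Real systems use embeddings,
    # which is what lets them catch reworded versions of the same claim.
    best, best_score = None, 0.0
    for known in KNOWN_FACT_CHECKS:
        score = difflib.SequenceMatcher(None, claim.lower(), known).ratio()
        if score > best_score:
            best, best_score = known, score
    return best if best_score >= threshold else None

stream = [
    "Good evening and welcome to the programme.",
    "Crime has fallen by 10 percent since 2020.",
]
flagged = [s for s in stream if looks_checkable(s)]
for claim in flagged:
    print(claim, "->", match_existing(claim))
```

The second sentence is flagged and matched to the stored fact check; the greeting passes through unflagged, which is the triage behavior the article describes.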
Users access the tools through a web dashboard that surfaces claims from news, social media, podcasts, video and radio. The interface links to original clips or posts and lets teams search by topic or speaker, see where claims have appeared, and receive alerts when debunked statements resurface. If enabled, the system can prioritize likely harmful claims so editors can decide what to work on first.
“Everything we built in technology is from the point of view of being built by fact-checkers for fact-checkers,” Dudfield told me. “We weren’t trying to work out what technology could do that would be cool. We were trying to find the actual problems we had with fact-checking and see if technology could help.”
That distinction shaped Full Fact AI, now an eight-person team at the center of the charity’s work. The system analyzes a claim’s potential harm by weighing how wrong it is, how believable it sounds and how likely it is to prompt people to act on it.
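The three factors named above can be combined into a single priority score. The sketch below uses a multiplicative combination, so a claim must be wrong, believable and actionable all at once to rank high; this weighting scheme, along with the field names and example values, is an invented illustration, since Full Fact's actual scoring model is not public.

```python
from dataclasses import dataclass

@dataclass
class ClaimAssessment:
    # Each factor is scored 0.0-1.0. In practice these would come from
    # model outputs, not hand-entered values; the names are invented.
    falseness: float      # how wrong the claim is
    believability: float  # how plausible it sounds
    actionability: float  # how likely it is to prompt people to act

def harm_score(a: ClaimAssessment) -> float:
    # Multiplicative: any factor near zero drives the whole score down,
    # so a wildly implausible falsehood ranks below a credible one.
    return a.falseness * a.believability * a.actionability

implausible = ClaimAssessment(falseness=0.9, believability=0.1, actionability=0.8)
credible = ClaimAssessment(falseness=0.9, believability=0.8, actionability=0.9)
assert harm_score(credible) > harm_score(implausible)
```

The design choice matters for triage: an additive score would let a very wrong but unbelievable claim outrank a moderately wrong but highly credible one, which is the opposite of where limited fact-checking resources should go.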
The harm model builds on research by Peter Cunliffe-Jones, founder of Africa Check and author of “Fake News: What’s the Harm?” In that book, published by the University of Westminster Press, Cunliffe-Jones traces how falsehoods can trigger measurable consequences — from mob attacks on health workers during the 2014 Ebola outbreak to lynchings in India fueled by WhatsApp hoaxes about child kidnappers. Between 2021 and 2025, he developed a model to predict which false claims were most likely to lead to real harm.
“Fact-checkers need to focus on what’s most likely to do real harm, not just what’s wrong,” Cunliffe-Jones told me in an email. Full Fact’s harm-scoring framework applies that model at scale, helping fact-checkers direct limited resources where they matter most.
The tools have been used by more than 40 fact-checking organizations in over 30 countries, including during Nigeria’s 2023 presidential election, where fact-checkers employed them to track and debunk viral claims in real time. The system is “a co-intelligence between human and machine,” said Kate Wilkinson, Full Fact’s senior product manager, who is leading the outreach to U.S. newsrooms.
“Our tools were not designed to replace fact-checkers,” Wilkinson said. “They take off the plate the really time- and resource-intensive task of media monitoring.”
Full Fact has begun inviting U.S. fact-checking desks, from local teams to national networks, to test its AI tools ahead of next year's elections. Wilkinson called the timing "appropriate" and said the platform could help newsrooms handle election claims at scale without adding engineering staff. The rollout includes subsidized licenses and short onboarding sessions to help teams integrate the tools quickly into their daily work.
The move arrives as many nonprofit and independent newsrooms face mounting financial strain while experimenting with how AI can help them better serve their audiences. In the United States, Meta ended its third-party fact-checking program in January, forcing several outlets to scale down and prompting nearly a third of organizations accredited by the International Fact-Checking Network to close.
Two weeks ago, Full Fact disclosed that Google had withdrawn its long-running support for the charity’s AI work. The organization said it received more than £1 million from Google last year, either directly or through related funds, all of which have now been discontinued.
Dudfield called the loss significant but not insurmountable. “Technology is not something we do on the side at Full Fact,” he said. “It’s core to what we do.”
In a statement, Chris Morris, Full Fact’s chief executive, said the charity is “urgently refocusing” its fundraising while staying independent. Morris, the BBC’s first on-air fact-checker, also criticized what he described as a political chill from Silicon Valley. “We think Google’s decisions, and those of other big U.S. tech companies, are influenced by the perceived need to please the current U.S. administration,” he wrote. “Full Fact will always be clear: Verifiable facts matter, and the big internet companies have responsibilities when it comes to curtailing the spread of harmful misinformation.”
The funding cuts follow a wider retreat by major tech firms from supporting fact-checking globally. Dudfield said the changing environment has reinforced his team’s mission to use technology to sustain independent verification. “We will carry on using technology to support fact-checking because we believe that is where we can make the biggest difference,” he said.
Full Fact’s expansion comes at a pivotal moment for fact-checking, said Lucas Graves, a journalism professor at the University of Wisconsin-Madison and author of “Deciding What’s True: The Rise of Political Fact-Checking in American Journalism.”
“There’s lots of experimentation with AI happening across the media landscape in the U.S., in newsrooms large and small,” he said. “But what Full Fact has is a purpose-built platform developed over nearly a decade for tracking and prioritizing misinformation — an area where most newsrooms don’t have a tremendous amount of experience.”
Graves called the expansion “well-timed,” adding that the threat from online misinformation is evolving fast while newsroom resources continue to shrink.
Despite shifting politics and declining support for the industry, Dudfield said the goal hasn’t changed. “We’re fact-checkers. We check facts,” he said. The work of exposing falsehoods, he added, remains essential even as pressures on independent journalists grow. The alternative, he said, would cost far more.