From The International Fact-Checking Network <[email protected]>
Subject The internet is breaking – but it’s not too late to make it better
Date December 18, 2025 2:19 PM
  Links have been removed from this email. Learn more in the FAQ.

In this edition:
* Why the internet feels harder to trust and where efforts to fix it are taking shape
* Europe’s fact-checkers aim to stop falsehoods before they go viral
* The IFCN responds to U.S. visa guidance that could chill fact-checking and journalism worldwide
* On our radar: the Global Fact-Check Chatbot, PolitiFact’s “Year of the Lies,” a Factchequeado investigation into Russian propaganda, GlobalFact speaker Steven Levitsky, and more.

(Shutterstock)

The internet is breaking – but it’s not too late to make it better

By Angie D. Holan (mailto:[email protected]?subject=&body=)

Signs are everywhere that the internet is getting worse. Consider just a few recent examples:

Oprah Winfrey warned ([link removed]) fans that she was not hawking pink salt or weight loss gummies on the internet, nor had she blocked tsunami evacuees from an escape route near her Hawaii property. “Beyond me, it’s everywhere,” she said. “A friend received a desperate text, supposedly from her son — turns out, it wasn’t. Another friend heard what sounded exactly like her daughter’s voice asking for money. It wasn’t her daughter — it was AI.”

In Minnesota, Roman Catholic Bishop Robert Barron told followers via YouTube ([link removed]) that he actually hadn’t been summoned to Rome (presumably for discipline), nor was he giving advice on how to remove demons from a toilet. These false claims were shared via AI-generated video. “These are fraudsters,” Barron said. “What they’re doing is making money off these things, because they monetize them through ads.”

In New Jersey, the grieving relatives of a 76-year-old retiree told ([link removed]) Reuters how he’d been invited to New York City by a flirty AI chatbot named “Big sis Billie” — a Meta chatbot that claimed it was real and wanted to meet him. Rushing to catch a train, he fell and hit his head. He died three days later.

I’ve watched these trends closely as director of the International Fact-Checking Network, a group that encourages fact-checking journalism around the world, and I see the public’s growing fear that everything they see on the internet is fake, that the old-fashioned markers of accuracy and authenticity are fading away.

But as bad as it is, there’s still hope: Serious people are documenting problems and searching for solutions, from scrappy fact-checkers to credentialed academics to members of Congress. We don’t have to let the internet sink into the clutches of fraudsters, con artists and propaganda campaigns. Increasingly, there are signs of common ground from which to fight back.

How we got here is worth a review. Since the early 2000s, a handful of huge, U.S.-based companies have come to dominate the internet: Meta, Google, X, Apple, Amazon, Microsoft and OpenAI. Not long ago, when bad publicity and public outrage flared, platforms regularly used a variety of moderation tools and policies to stop fraud from running wild.

The environment shifted dramatically after President Donald Trump’s election. While Trump and his allies framed their pressure as fighting censorship, platforms were already reconsidering the costs and exposure of moderation. Earlier this year, Meta announced it was ending its third-party fact-checking program in the United States — a program that had allowed fact-checking organizations to identify false claims and label them (not take them down). The company claimed it was promoting “free expression,” but the practical effect was to remove a substantial check on viral misinformation. Trump was later asked if Meta’s Mark Zuckerberg ended the program because Trump threatened him; Trump said ([link removed]), “Probably.”

These dynamics are also part of U.S. foreign policy now; the Trump administration is pressuring foreign governments to weaken their tech regulations. Brazil provides a clear example. Trump imposed high tariffs on the country even though Brazil imports more from the U.S. than it exports. He cited two reasons: opposition to Brazil’s prosecution of his political ally Jair Bolsonaro — a former president facing charges related to his supporters’ use of social media to spread election fraud claims after his 2022 loss — and opposition to Brazil’s court orders requiring social media companies to remove certain content. The administration has threatened Brazilian Supreme Court Justice Alexandre de Moraes with sanctions typically reserved for human rights abusers.

Other democracies are targets, too. The State Department has suggested it might restrict travel visas for European Union officials behind tech regulations, and it recently instructed consular officers to deny visas to individuals who have worked in fact-checking, content moderation and trust and safety. (The IFCN issued a statement ([link removed]) objecting to the guidance.) In South Korea, the administration is opposing new regulations on tech platform monopolies. Trump posted in September that he would oppose “any and all regulations on U.S. tech companies from abroad.” These disputes differ from traditional trade conflicts over commodities — instead, the administration is using economic pressure to prevent democratic governments from regulating companies operating within their borders.

Back in the U.S., the tech platforms’ retreat on moderation and safety is already generating consequences they may not have anticipated. Tech brands overall received mixed ratings in a recent Axios Harris poll ([link removed]) that measured corporate reputation, with Meta and X performing exceptionally poorly. Political candidates in recent elections campaigned and won on anti-tech messaging, including Virginia’s Abigail Spanberger pledging to rein in data centers, New Jersey’s Mikie Sherrill promoting child safety online, and New York City’s Zohran Mamdani criticizing algorithmic ticket pricing for World Cup soccer games.

There’s even been an outbreak of bipartisan agreement in the halls of Congress, where many ideas for curbing tech excess are gaining sponsors from both parties. Sens. Josh Hawley and Richard Durbin, for example, think people should be able to sue AI companies if their products cause harm. In September they introduced ([link removed]) the bipartisan Aligning Incentives for Leadership, Excellence and Advancement in Development (AI LEAD) Act. “When a defective toy car breaks and injures a child, parents can sue the maker. Why should AI be treated any differently?” Hawley said in announcing the legislation.

How about requiring platforms to remove deepfake revenge porn? That one actually became law this year with Sen. Amy Klobuchar’s TAKE IT DOWN Act, co-sponsored by Republican Sen. Ted Cruz. “Passing the TAKE IT DOWN Act into law is a major victory for victims of online abuse – giving people legal protections and tools for when their intimate images, including deepfakes, are shared without their consent, and enabling law enforcement to hold perpetrators accountable,” Klobuchar said. Its requirements for platforms take effect in 2026.

Sen. Mark Warner reintroduced ([link removed]) his ACCESS Act in May to let people move their social media data and connections to another service, making it easier for newer, more user-friendly platforms to compete with the entrenched giants. Hawley and Sen. Richard Blumenthal co-sponsored it. The idea is to help start-ups challenge Big Tech by giving users more control over their own data.

Research into the dynamics of algorithms is also continuing, in new and compelling directions. For years, when people have asked me how much misinformation is on social media, I’ve had to give the dispiriting answer that there was no real way to measure it. But now, thanks to the SIMODS project ([link removed]) (Structural Indicators to Monitor Online Disinformation Scientifically), I can give a much more informative answer. The project used AI to scientifically sample posts on the different platforms, then human reviewers categorized the posts to determine how much inaccurate content appeared in the sample. The results were fascinating: The highest percentages of misinformation were found on TikTok (20%), Facebook (13%) and X (13%), followed by YouTube (8%), Instagram (8%) and LinkedIn (2%).

Then the researchers drilled down to examine whether misinformation was more likely to go viral. In other words, did algorithms reward false content over true content? It turns out that accounts that repeatedly shared misinformation attracted more engagement per post than credible accounts on every platform except LinkedIn. The gap was widest on YouTube and Facebook, where bad actors generated eight times and seven times the normal engagement, respectively; the multiple was five on Instagram and X, and two on TikTok.
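
If you’re curious about the mechanics, here is a minimal sketch, in Python, of how an engagement multiple like the ones above could be computed from a labeled sample of posts. The sample data, field layout and function are invented for illustration; this is not the SIMODS team’s actual code or schema.

    # Hypothetical illustration of the engagement-multiple metric described
    # above; the sample data and field layout are invented.
    from statistics import mean

    # (platform, account_repeatedly_shares_misinfo, engagement_on_post)
    sample = [
        ("facebook", True, 1400), ("facebook", True, 1380),
        ("facebook", False, 200), ("facebook", False, 198),
        ("tiktok", True, 900), ("tiktok", False, 450),
    ]

    def engagement_multiple(sample, platform):
        # Average engagement per post for accounts that repeatedly share
        # misinformation, divided by the same average for credible accounts.
        misinfo = [e for p, flag, e in sample if p == platform and flag]
        credible = [e for p, flag, e in sample if p == platform and not flag]
        if not misinfo or not credible:
            return None  # not enough labeled posts to compare
        return mean(misinfo) / mean(credible)

    print(round(engagement_multiple(sample, "facebook"), 1))  # 7.0 on this toy data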

The researchers were funded to study only four European countries (France, Spain, Poland and Slovakia), but the method could readily be repeated for the United States, and the implications deserve further study. The numbers above make me wonder whether a platform like TikTok is more permissive about letting people post inaccurate content but does a much better job than its competitors of keeping it from going viral. There are also implications to tease out about LinkedIn — does the way it favors real people and companies posting under their own names make for more accurate posts? It certainly seems that way.

The point is that these companies can improve, and independent researchers can help suggest reforms for the benefit of everyday users. Lately, platforms have sold data to advertisers, data brokers and artificial intelligence systems while restricting researcher access to that same data. To reverse that trend, the Knight-Georgetown Institute spearheaded a project called “Better Access” ([link removed]), to which I’ve contributed. The project sets out uniform, cross-platform standards to give researchers access to high-profile public postings. With better access, researchers could tackle all sorts of projects, like studying how health care scams spread, or measuring whether platforms actually enforce their own policies equally. These studies are nearly impossible now, but crucial for accountability.

Finally, while Meta stopped the fact-checking program in the United States, the program appears to be continuing in the rest of the world. Regulatory efforts in Europe, Brazil and South Korea aren’t going away; the proposals are underpinned by the idea that fraud and hoaxes cannot be effectively countered only by urging people to be more careful and to think before they share. Individual acts of prevention aren’t scalable against the power of internet algorithms.

All these efforts suggest a strong desire for meaningful change, so that we don’t head toward a world that’s indifferent to reality itself — a world where parents have to wonder if that’s really their child calling for help, where clergy spend as much time debunking AI versions of themselves as they do serving their flock, or where a lonely retiree can be lured to his death by a chatbot.

The European research points to something crucial: design choices matter enormously. By requiring real identities and favoring authentic professional profiles, for example, LinkedIn created an environment where accuracy is the norm. What other lessons could we learn with more study?

The internet isn’t broken by nature. It’s broken by choices that prioritize viral engagement over accuracy, that reward anonymous bad actors, that treat disinformation as an acceptable cost of doing business. Different decisions could produce dramatically different results. These aren’t unsolvable problems. They’re choices, and we can insist on better ones.
Share this story via Poynter.org ([link removed])


** Europe’s fact-checkers aim to stop falsehoods before they go viral
------------------------------------------------------------

By Enock Nyariki (mailto:[email protected]?subject=&body=)

False claims on TikTok, YouTube Shorts and Instagram Reels often outrun fact-checkers before anyone can respond. By the time a debunk appears, the harmful narrative has already crossed borders, languages and platforms.

A coalition of European fact-checkers has built a new response. Launched in November by the European Fact-Checking Standards Network (EFCSN), "Prebunking at Scale" ([link removed]) uses AI to spot misleading narratives while they are still forming. The goal is to give fact-checkers an early warning system to start working on research and to publish prebunks before falsehoods take hold.

The tool scans publicly available short-form video and converts speech and on-screen text into searchable claims. It groups those claims into narratives that can be tracked across platforms and languages. When a narrative begins to surge, fact-checkers receive alerts showing where it is spreading, in which languages and how quickly.
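
To make the alerting step concrete, here is a minimal sketch, in Python, of one way such a surge detector could work, assuming claims have already been extracted and clustered into narratives. The data shape, threshold and function name are assumptions for illustration, not the EFCSN tool's actual design.

    # Hypothetical narrative surge detector; the data shape and threshold are
    # invented. Assumes upstream steps have already turned videos into
    # (day_index, narrative_id) claim records.
    from collections import defaultdict

    def detect_surges(claims, baseline_days=7, surge_factor=3.0):
        # Flag a narrative when its claim count on the latest day exceeds
        # surge_factor times its average over the preceding baseline_days.
        counts = defaultdict(lambda: defaultdict(int))
        for day, narrative in claims:
            counts[narrative][day] += 1

        alerts = []
        for narrative, by_day in counts.items():
            days = sorted(by_day)
            latest = days[-1]
            history = [by_day[d] for d in days[:-1]][-baseline_days:]
            baseline = sum(history) / len(history) if history else 0
            if baseline and by_day[latest] > surge_factor * baseline:
                alerts.append((narrative, latest, by_day[latest]))
        return alerts

    # Toy data: a narrative simmers at 2 claims a day, then jumps to 9.
    stream = [(d, "toilet-demons") for d in range(1, 7) for _ in range(2)]
    stream += [(7, "toilet-demons")] * 9
    print(detect_surges(stream))  # [('toilet-demons', 7, 9)]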

"We wanted something closer to a weather forecast than a fire alarm," said Tori Zopf, the project manager at EFCSN. "Not just what's already burning, but what's likely to hit next."

Full Fact in the U.K. and Maldita.es in Spain are building the technology together. Full Fact handles speech-to-text and claim extraction. Maldita leads the narrative clustering and risk scoring that help editors decide what to tackle first. The tool does not judge whether a claim is true or false. That decision stays with human fact-checkers.

The focus on short-form video is deliberate. Platforms like TikTok and YouTube Shorts have become major vectors for falsehoods on public interest issues, but many fact-checking organizations struggle to monitor them at scale. "It's where a lot of narratives are taking off first," Zopf said, "and where fact-checkers have had the least visibility."

The project tracks climate, conflicts, the European Union, health and migration across five European languages, with plans to reach 25 by the end of its three-year run. The network has also developed a shared prebunking methodology to guide how members respond once alerts appear.

Support journalism, truth and democracy ([link removed])

ON OUR RADAR
* The International Fact-Checking Network issued a statement ([link removed]) expressing deep concern over new U.S. State Department visa directives targeting people who have worked in fact-checking, content moderation, and trust and safety. The IFCN said fact-checking is journalism protected by the First Amendment, and that its 170-plus signatories in more than 80 countries add information to the public record rather than remove content. The policy risks chilling journalism worldwide and sidelining professionals who protect the public from exploitation, fraud, and coordinated harassment.
* PolitiFact has named 2025 the "Year of the Lies" ([link removed]), arguing that the volume of falsehoods, amplified by AI-generated content and fabricated chatbot answers, has made its usual Lie of the Year format feel insufficient. Editor-in-chief Katie Sanders wrote that PolitiFact will recalibrate by focusing less on offenders and more on people hurt by false claims, starting with a three-part series on a North Dakota farmer, a South Florida pediatrician, and two brothers deported after an ICE check-in.
* Reuters reports ([link removed]) that internal Meta documents estimated scam ads and other banned content made up about 19% of the company’s China ad revenue in 2024 – more than $3 billion. The investigation says Meta cut the share to about 9% during a 2024 crackdown, then paused the China-focused effort after concerns about “revenue impact,” and violations rebounded to about 16% by mid-2025, hitting users worldwide.

* Factchequeado reviewed 4,539 articles published by Mexico’s Journalists’ Club from March 2020 to September 2025 and documented ([link removed]) a sustained pipeline for Russian propaganda, including RT workshops and awards for international disinformers. Since April 2025, 72% of the club’s website content has come from Russian and Cuban state media.
* Community Notes on X cut the spread of misleading posts, with reposts down about 46%, likes about 44%, replies about 22% and views about 14% once a note was attached, according to a peer-reviewed study that tracked 40,078 posts ([link removed]) where notes were proposed. Researchers at the University of Washington and Stanford found the biggest drops came when notes appeared quickly after a post was created.

* In a Foreign Affairs essay ([link removed]), Harvard political scientist Steven Levitsky and co-authors argue that the United States slid into “competitive authoritarianism” in 2025, with a second Trump administration using state power to target critics, shield allies, and pressure the media, universities, and civil society. Levitsky, a keynote speaker at GlobalFact 2024 ([link removed]) in Sarajevo, says the turn is rapid but reversible if opponents keep contesting power through elections, courts, and sustained civic action.

* Meta's Oversight Board has agreed ([link removed]) to advise on the global expansion of Community Notes, the crowd-sourced labeling system that replaced third-party fact-checking in the United States. Meta asked the Board to assess risks tied to press freedom, digital literacy, and political conditions, and to advise on whether some countries should be excluded if the consensus-based algorithm cannot function as intended.

* Germany has accused Russia of running a coordinated disinformation campaign to destabilize the country’s February 2025 federal election, the BBC reports ([link removed]) . German officials said the effort included manipulated videos targeting senior political candidates and was linked to the same GRU-backed network blamed for an August 2024 cyberattack on German air safety systems.

* Disinformation analysis startup Repsense has raised ([link removed]) €2 million in seed funding to scale its Havel platform, which tracks the spread and impact of information across social and traditional media. The Vilnius-based company said the funding will deepen its AI-powered analysis of online narratives, including short-form and algorithm-driven content.

* The Global Fact-Check Chatbot has launched ([link removed]) with 38 founding members from fact-checking organizations worldwide. First discussed at GlobalFact 11 in Bosnia in 2024, the multilingual AI tool aims to speed up fact-checking and give the public a way to verify claims in more than 50 languages.

Have ideas or suggestions for the next issue of Factually? Email us at [email protected] (mailto:[email protected]?subject=&body=).

Happy holidays!


© All rights reserved Poynter Institute 2025
801 Third Street South, St. Petersburg, FL 33701

Was this email forwarded to you? Subscribe to our newsletters ([link removed]).

If you don't want to receive email updates from Poynter, we understand.
You can change your subscription preferences ([link removed]) or unsubscribe from all Poynter emails ([link removed]).