
In this edition:

  • Why the internet feels harder to trust and where efforts to fix it are taking shape

  • Europe’s fact-checkers aim to stop falsehoods before they go viral

  • The IFCN responds to U.S. visa guidance that could chill fact-checking and journalism worldwide

  • On our radar: the Global Fact Check Bot, PolitiFact’s Lie of the Year, Factchequeado investigates Russian propaganda, GlobalFact speaker Steve Levitsky, and more.





The internet is breaking – but it’s not too late to make it better


By Angie D. Holan



Signs are everywhere that the internet is getting worse. Consider just a few recent examples:

Oprah Winfrey warned fans that she was not hawking pink salt or weight loss gummies on the internet, nor had she blocked tsunami evacuees from an escape route near her Hawaii property. “Beyond me, it’s everywhere,” she said. “A friend received a desperate text, supposedly from her son — turns out, it wasn’t. Another friend heard what sounded exactly like her daughter’s voice asking for money. It wasn’t her daughter — it was AI.”

In Minnesota, Roman Catholic Bishop Robert Barron told followers via YouTube that he actually hadn’t been summoned to Rome (presumably for discipline), nor was he giving advice on how to remove demons from a toilet. These false claims were shared via AI-generated video. “These are fraudsters,” Barron said. “What they’re doing is making money off these things, because they monetize them through ads.”

In New Jersey, the grieving relatives of a 76-year-old retiree told Reuters how he’d been invited to New York City by a flirty AI chatbot named “Big sis Billie” — a Meta chatbot that claimed it was real and wanted to meet him. Rushing to catch a train, he fell and hit his head. He died three days later.

I’ve watched these trends closely as director of the International Fact-Checking Network, a group that encourages fact-checking journalism around the world, and I see the public’s growing fear that everything on the internet is fake, that the old-fashioned markers of accuracy and authenticity are fading away.

But as bad as it is, there’s still hope: Serious people are documenting problems and searching for solutions, from scrappy fact-checkers to credentialed academics to members of Congress. We don’t have to let the internet sink into the clutches of fraudsters, con artists and propaganda campaigns. Increasingly, there are signs of common ground for people to fight back.

How we got here is worth reviewing. Since the early 2000s, a handful of huge, U.S.-based companies have come to dominate the internet: Meta, Google, X, Apple, Amazon, Microsoft and OpenAI. Not long ago, when faced with bad publicity and public outrage, these platforms regularly used a variety of moderation tools and policies to keep fraud from running wild.

The environment shifted dramatically after President Donald Trump’s election. While Trump and his allies framed their pressure as fighting censorship, platforms were already reconsidering the costs and risks of moderation. Earlier this year, Meta announced it was ending its third-party fact-checking program in the United States — a program that had allowed fact-checking organizations to identify and label false claims (not take them down). The company claimed it was promoting “free expression,” but the practical effect was to remove one of its most substantial checks on viral misinformation. Trump was later asked if Meta’s Mark Zuckerberg ended the program because Trump threatened him; Trump said, “Probably.”

These dynamics are now part of U.S. foreign policy, too; the Trump administration is pressuring foreign governments to weaken their tech regulations. Brazil provides a clear example. Trump imposed high tariffs on the country even though Brazil imports more from the U.S. than it exports to the U.S. He cited two reasons: opposition to Brazil’s prosecution of his political ally Jair Bolsonaro — a former president facing charges related to his supporters’ use of social media to spread election fraud claims after his 2022 loss — and opposition to Brazil’s court orders requiring social media companies to remove certain content. The administration has threatened Brazilian Supreme Court Justice Alexandre de Moraes with sanctions typically reserved for human rights abusers.

Other democracies are targets, too. The State Department has suggested it might restrict travel visas for European Union officials behind tech regulations, and it recently instructed consular officers to deny visas to individuals who have worked in fact-checking, content moderation, and trust and safety. (The IFCN issued a statement objecting to the guidance.) In South Korea, the administration is opposing new regulations on tech platform monopolies. Trump posted in September that he would oppose “any and all regulations on U.S. tech companies from abroad.” These disputes differ from traditional trade conflicts over commodities — instead, the administration is using economic pressure to prevent democratic governments from regulating companies operating within their own borders.

Back in the U.S., the tech platforms’ retreat on moderation and safety is already generating consequences they may not have anticipated. Tech brands overall received mixed ratings in a recent Axios Harris poll that measured corporate reputation, with Meta and X performing exceptionally poorly. Political candidates in recent elections campaigned and won on anti-tech messaging, including Virginia’s Abigail Spanberger pledging to rein in data centers, New Jersey’s Mikie Sherrill promoting child safety online, and New York City’s Zohran Mamdani criticizing algorithmic ticket pricing for World Cup soccer games.

There’s even been an outbreak of bipartisan agreement in the halls of Congress, where many ideas for curbing tech excess are gaining sponsors from both parties. Sens. Josh Hawley and Richard Durbin, for example, think people should be able to sue AI companies if their products cause harm; in September they introduced the bipartisan Aligning Incentives for Leadership, Excellence and Advancement in Development (AI LEAD) Act. “When a defective toy car breaks and injures a child, parents can sue the maker. Why should AI be treated any differently?” Hawley said in announcing the legislation.

How about requiring platforms to remove deepfake revenge porn? That one actually became law this year with Sen. Amy Klobuchar’s TAKE IT DOWN Act, co-sponsored by Republican Sen. Ted Cruz. “Passing the TAKE IT DOWN Act into law is a major victory for victims of online abuse – giving people legal protections and tools for when their intimate images, including deepfakes, are shared without their consent, and enabling law enforcement to hold perpetrators accountable,” Klobuchar said. It goes into effect in 2026.

Sen. Mark Warner reintroduced his ACCESS Act in May to let people take their social media data and connections with them to another platform. Co-sponsored by Hawley and Sen. Richard Blumenthal, the bill aims to make it easier for newer, more user-friendly start-ups to challenge the entrenched giants by giving users more control over their own data.

Research into the dynamics of algorithms is also continuing, in new and compelling directions. For years, when people have asked me how much misinformation is on social media, I’ve had to give the dispiriting answer that there was no real way to measure it. But now, thanks to the SIMODS project (Structural Indicators to Monitor Online Disinformation Scientifically), I can give a much more informative answer. The project used AI to draw scientific samples of posts from the different platforms, then human reviewers categorized the posts to determine how much inaccurate content appeared in each sample. The results were fascinating: The highest percentages of misinformation were found on TikTok (20%), Facebook (13%) and X (13%), followed by YouTube (8%), Instagram (8%) and LinkedIn (2%).

Then the researchers drilled down to examine whether misinformation was more likely to go viral. In other words, did algorithms reward false content over true content? It turns out that accounts that repeatedly shared misinformation attracted more engagement per post than credible accounts on every platform except LinkedIn. The multipliers were highest on YouTube and Facebook, where bad actors generated eight times and seven times normal engagement, respectively; they were five times normal engagement on Instagram and X, and two times on TikTok.
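
As a back-of-the-envelope illustration, here is a minimal Python sketch of the two measurements described above. This is not the SIMODS code; the function names and toy numbers are invented for the example.

```python
# Illustrative sketch only -- not the SIMODS project's code. It computes the
# two quantities described above: the share of sampled posts that reviewers
# labeled as misinformation, and engagement per post for misinforming
# accounts relative to credible ones. Names and numbers are hypothetical.
from statistics import mean

def prevalence(labels: list[bool]) -> float:
    """Fraction of sampled posts that human reviewers marked inaccurate."""
    return sum(labels) / len(labels)

def engagement_ratio(misinfo_per_post: list[float],
                     credible_per_post: list[float]) -> float:
    """Mean engagement per post for accounts that repeatedly share
    misinformation, divided by the mean for credible accounts."""
    return mean(misinfo_per_post) / mean(credible_per_post)

# Toy sample: 2 of 10 posts labeled inaccurate -> 20%, the TikTok figure.
sample = [True, True] + [False] * 8
print(f"prevalence: {prevalence(sample):.0%}")  # prevalence: 20%

# Toy engagement counts: misinforming accounts draw twice the engagement.
print(f"ratio: {engagement_ratio([220, 180], [110, 90]):.1f}x")  # ratio: 2.0x
```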

The researchers were funded to study only four European countries (France, Spain, Poland and Slovakia), but the method could readily be repeated for the United States, and the implications are important and deserve further study. Looking at the numbers above, I wonder whether a platform like TikTok is more permissive in letting people post inaccurate content but does a much better job than its competitors of keeping it from going viral. There are also implications to be teased out about LinkedIn — does the way it favors real people and companies posting under their own names make for more accurate postings? It certainly seems that way.

The point is that these companies can improve, and independent researchers can help suggest reforms for the benefit of everyday users. Lately, platforms have sold data to advertisers, data brokers and artificial intelligence systems while restricting researcher access to that same data. To reverse that trend, the Knight-Georgetown Institute spearheaded a project called “Better Access,” to which I’ve contributed. The project sets out uniform, cross-platform standards to give researchers access to high-profile public postings. With better access, researchers could tackle all sorts of projects, like studying how health care scams spread, or measuring whether platforms actually enforce their own policies equally. These studies are nearly impossible now, but crucial for accountability.

Finally, while Meta ended its fact-checking program in the United States, the program appears to be continuing in the rest of the world. Regulatory efforts in Europe, Brazil and South Korea aren’t going away; the proposals are underpinned by the idea that fraud and hoaxes cannot be effectively countered only by urging people to be more careful and to think before they share. Individual acts of prevention aren’t scalable against the power of internet algorithms.

All these efforts suggest a strong desire for meaningful change, so that we don’t head toward a world that’s indifferent to reality itself — a world where parents have to wonder if that’s really their child calling for help, where clergy spend as much time debunking AI versions of themselves as they do serving their flock, or where a lonely retiree can be lured to his death by a chatbot.

The European research revealed something crucial: design choices matter enormously. By requiring real identities and favoring authentic professional profiles, for example, LinkedIn created an environment where accuracy is the norm. What other lessons could we learn with more study?

The internet isn’t broken by nature. It’s broken by choices that prioritize viral engagement over accuracy, that reward anonymous bad actors, that treat disinformation as an acceptable cost of doing business. Different decisions could produce dramatically different results. These aren’t unsolvable problems. They’re choices, and we can insist on better ones.


Europe’s fact-checkers aim to stop falsehoods before they go viral

By Enock Nyariki


False claims on TikTok, YouTube Shorts and Instagram Reels often outrun fact-checkers before anyone can respond. By the time a debunk appears, the harmful narrative has already crossed borders, languages and platforms.

A coalition of European fact-checkers has built a new response. Launched in November by the European Fact-Checking Standards Network (EFCSN), "Prebunking at Scale" uses AI to spot misleading narratives while they are still forming. The goal is an early warning system that lets fact-checkers start their research and publish prebunks before falsehoods take hold.

The tool scans publicly available short-form video and converts speech and on-screen text into searchable claims. It groups those claims into narratives that can be tracked across platforms and languages. When a narrative begins to surge, fact-checkers receive alerts showing where it is spreading, in which languages and how quickly.
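
To make that flow concrete, here is a minimal sketch of such a monitoring loop. It is an illustration, not EFCSN's actual implementation; Clip, extract_claims, narrative_key and SURGE_THRESHOLD are hypothetical stand-ins for the real transcription, claim-extraction and clustering components.

```python
# Hypothetical sketch of the monitoring flow described above -- not EFCSN's
# code. Every name here is an assumed stand-in for the real components.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Clip:
    platform: str
    language: str
    transcript: str  # speech and on-screen text, already converted to text

def extract_claims(transcript: str) -> list[str]:
    # Stand-in: a production system would use an ML claim-extraction model.
    return [s.strip() for s in transcript.split(".") if s.strip()]

def narrative_key(claim: str) -> str:
    # Stand-in for cross-platform, cross-language narrative clustering.
    return claim.lower().split()[0]

SURGE_THRESHOLD = 3  # alert once a narrative shows up this many times

def monitor(clips: list[Clip]) -> None:
    # Group each extracted claim into a narrative, then flag surges.
    narratives = defaultdict(list)
    for clip in clips:
        for claim in extract_claims(clip.transcript):
            narratives[narrative_key(claim)].append(clip)
    for key, hits in narratives.items():
        if len(hits) >= SURGE_THRESHOLD:
            platforms = {c.platform for c in hits}
            languages = {c.language for c in hits}
            print(f"ALERT: narrative '{key}' surging on {platforms} in {languages}")

# Toy usage: the same false claim surfacing across platforms and languages.
monitor([Clip("TikTok", "es", "Vaccines contain chips."),
         Clip("YouTube", "fr", "Vaccines contain chips."),
         Clip("Instagram", "en", "Vaccines contain chips.")])
```

Note that, as in the real tool, nothing in this loop judges whether a claim is true; it only surfaces what is spreading, where and how fast.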

"We wanted something closer to a weather forecast than a fire alarm," said Tori Zopf, the project manager at EFCSN. "Not just what's already burning, but what's likely to hit next."

Full Fact in the U.K. and Maldita.es in Spain are building the technology together. Full Fact handles speech-to-text and claim extraction. Maldita leads the narrative clustering and risk scoring that help editors decide what to tackle first. The tool does not judge whether a claim is true or false. That decision stays with human fact-checkers.

The focus on short-form video is deliberate. Platforms like TikTok and YouTube Shorts have become major vectors for falsehoods on public interest issues, but many fact-checking organizations struggle to monitor them at scale. "It's where a lot of narratives are taking off first," Zopf said, "and where fact-checkers have had the least visibility."

The project tracks climate, conflicts, the European Union, health and migration across five European languages, with plans to reach 25 by the end of its three-year run. The network has also developed a shared prebunking methodology to guide how members respond once alerts appear.

Have ideas or suggestions for the next issue of Factually? Email us at [email protected].


Happy holidays!