From International Fact-Checking Network <[email protected]>
Subject Could ChatGPT supercharge false narratives?
Date February 2, 2023 5:35 PM
  Links have been removed from this email. Learn more in the FAQ.
Many warn of the tool's potential to be a misinformation superspreader, capable of instantly producing news articles, blogs and political speeches.


** Could ChatGPT supercharge false narratives?
------------------------------------------------------------
Sam Altman in Sun Valley, Idaho, July 9, 2009. (AP Photo/Nati Harnik)

ChatGPT, a new artificial intelligence application by OpenAI, has captured the imagination of the internet. Some have called it the greatest technological advancement in modern history. In a recent interview, Noam Chomsky called it ([link removed]) “basically high tech plagiarism.” Others have suggested large language models like ChatGPT spell the end ([link removed]) for Google search, because they spare users the work of filtering through multiple websites to find digestible information.

The technology works by training on vast quantities of text gathered from across the internet, building a statistical model of language that it then uses to generate new content in response to user prompts. Users can ask it to produce almost any kind of text-based content.
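To make the mechanics concrete, here is a minimal sketch of prompting a large language model programmatically. It assumes the openai Python package as it existed in early 2023 (v0.x) and an API key supplied via an environment variable; the model name and prompt are illustrative, not a recommendation.

```python
# Minimal sketch: generating text from a prompt with a large language model.
# Assumes the openai Python package, v0.x (current as of early 2023), and an
# OPENAI_API_KEY environment variable. Model name and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",   # a GPT-3.5-family completion model
    prompt="Write a short news-style paragraph about a local weather event.",
    max_tokens=150,             # cap the length of the generated text
    temperature=0.7,            # higher values produce more varied output
)

print(response.choices[0].text.strip())
```

The same few lines could be pointed at any topic and run in a loop, which is precisely why observers worry about scale.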

Given its clear creative power, many are warning of ChatGPT’s potential to be a misinformation superspreader, capable of instantly producing news articles, blogs, eulogies and political speeches in the style of particular politicians, writing whatever the user desires. With only slight advancements, it’s not hard to see how AI-powered bot accounts on social media could become virtually indistinguishable from humans.

Analysts at NewsGuard ([link removed]) , an online trust-rating platform for news, recently tested out the tool and found it produced false information on command when asked about sensitive political topics.

“In most cases, when we asked ChatGPT to create disinformation, it did so, on topics including the January 6, 2021, insurrection at the US Capitol, immigration and China’s mistreatment of its Uyghur minority,” wrote Jim Warren for the Chicago Tribune ([link removed]) , adding that it took up to five tries in certain cases to get past OpenAI’s security buffer.

“Indeed, for some myths, it took NewsGuard as many as five tries to get the chatbot to relay misinformation, and its parent company has said that upcoming versions of the software will be more knowledgeable,” wrote Jack Brewster, Lorenzo Arvanitis and McKenzie Sadeghi for NewsGuard ([link removed]) .

A transcription of text entered into the ChatGPT application by NewsGuard analysts, and the tool’s response. (NewsGuard)

While ChatGPT has a basic moral framework to prevent unethical usage (ask it to enumerate positive attributes of Adolf Hitler, for example, and it will refuse at first), that framework can easily be bypassed by offering weak justifications for the instruction.

Users can access forbidden information by tricking ChatGPT with a hypothetical. (@wlyzach/Twitter)

“It can serve up information in clear, simple sentences, rather than just a list of internet links. It can explain concepts in ways people can easily understand,” wrote New York Times technology reporters Nico Grant and Cade Metz ([link removed]) . “It can even generate ideas from scratch, including business strategies, Christmas gift suggestions, blog topics and vacation plans.”

“ChatGPT is freakishly good at spitting out misinformation on purpose,” reads a headline from Futurism ([link removed]) .

Sam Altman, the CEO of OpenAI, said on Twitter last year, “We should be much more nervous about the growing calls to censor ‘misinformation’ than the misinformation itself,” adding that ([link removed]) experts have in the past been wrong about misinformation labels.

Software has already been developed ([link removed]) to instantly detect whether ChatGPT has been used to generate text.
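One common heuristic behind such detectors is perplexity: text produced by a language model tends to be unusually predictable to a language model. Below is a minimal sketch of that idea, scoring a passage with the open-source GPT-2 model via Hugging Face’s transformers library. It illustrates the general technique only; it is not the method used by any particular detection product, and real detectors combine multiple signals and calibrate thresholds on labeled data.

```python
# Minimal sketch of a perplexity-based heuristic for spotting machine-generated
# text. Uses the open-source GPT-2 model via the transformers library; this is
# an illustration of the general idea, not any specific product's method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

# Lower scores mean the text is more predictable, which weakly suggests
# machine generation; any cutoff would have to be calibrated on real data.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```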

Interesting fact checks
Crab meat (AP Photo/Matthew S. Gunby)
• Demagog: Pandemic fakes and war disinformation – what do they have in common? ([link removed]) (English)
• “With Russia’s invasion of Ukraine in 2022, social media accounts, so far known for disinformation related to the pandemic, began spreading false content about the war.”
• Factly: Fraudulent websites are collecting user information in the name of FAKE Healthcare Schemes ([link removed]) (English)
• “Another post ([link removed]) is being shared on social media platforms asking senior citizens to register on the website given in the post to get $3034 yearly added to their Social Security checks. Let’s verify the claim made in the post.”
• Faktisk: Does Norway make a lot of money from Putin’s war? ([link removed]) (English)
• “We will get the final answer with the state accounts in April. But based on calculations we now have access to, Hansson is right. In the revised budget from December, the total petroleum revenues are calculated at NOK 1,316 billion. This is over a thousand billion more than what was calculated in the state budget.”
• Taiwan FactCheck: Is Krab made of styrofoam? ([link removed]) (Chinese traditional)
• “A video showing the process of making crab sticks circulated online, subtitled ‘Are you still eating crab sticks?’ It claimed the sticks were ‘made of Styrofoam’ and were ‘plastic food,’ sparking wild rumors among the public.”
Quick hits

Brazil’s President Luiz Inacio Lula da Silva speaks in Brasilia, Brazil, on Jan. 16, 2023. (AP Photo/Eraldo Peres, File)
From the news:
• Regulate and punish: the bills on disinformation that await the new congress ([link removed]) “Of the more than 100 bills (PLs) that seek to legislate on disinformation inherited by congressmen who take office this Wednesday (1st), most attack the problem from the perspective of punishing disinformation and regulating platforms. Among 112 projects that deal with disinformation identified in a survey carried out by Lupa, most establish punishments for those who share false content or impose norms to be adopted by big techs, while only 14 treat the impact of false information as an issue that can be resolved through education.” (Agencia Lupa, Nathália Afonso, Maiquel Rosauro)
• Brazilian government responds to demands of press freedom organizations ([link removed]) “The Brazilian government announced the creation of the National Observatory of Violence against Journalists, a demand from organizations defending press freedom and journalists in the country. The announcement was made by the Minister of Justice, Flavio Dino ([link removed]) , on Jan. 17, a day after meeting with representatives in Brasilia, who shared with the minister a series of proposals to contain violence against press professionals.” (LatAm Journalism Review, Carolina de Assis)
From/for the community:
• Google and YouTube are partnering with the International Fact-Checking Network ([link removed]) to distribute a $13.2 million grant ([link removed]) to the international fact-checking community. “The world needs fact-checking more than ever before. This partnership with Google and YouTube infuses financial support to global fact-checkers and is a step in the right direction,” said Baybars Örsek, former executive director of the IFCN. “And while there’s much work to be done, this partnership has sparked meaningful collaboration and an important step.”
• The IFCN has awarded $450,000 in grant support to organizations working to lessen the impact of false and misleading information on WhatsApp. In partnership with Meta, the Spread the Facts Grant Program ([link removed]) gives verified fact-checking organizations resources to identify, flag and reduce the spread of misinformation across the more than 100 billion messages sent on the platform each day. The grant supports eleven projects from eight countries: India, Spain, Nigeria, Georgia, Bolivia, Italy, Indonesia and Jordan. Read more about the announcement here ([link removed]) .
• IFCN signatory Jagran New Media won a silver at the World Association of Newspapers and News Publishers ([link removed]) awards for its event, “Sach Ke Sathi” (Truth Warriors). Read more ([link removed]) .
• The OSINT team at Faktisk, in collaboration with doctoral student Sohail Ahmed Khan, developed two prototypes of digital tools ([link removed]) that verify audiovisual content.
• IFCN job announcements: Program Officer ([link removed]) and Monitoring & Evaluation Specialist ([link removed])

Thanks for reading. If you are a fact-checker and you’d like your work/projects/achievements highlighted in the next edition, send us an email at [email protected] (mailto:[email protected]) by next Tuesday. Corrections? Tips? We’d love to hear from you. Email us at [email protected] (mailto:[email protected]) .

Factually ([link removed]) is a newsletter about fact-checking and misinformation from Poynter’s International Fact-Checking Network ([link removed]) . Sign up here ([link removed]) to receive it in your email every other Thursday.
Seth Smalley
Reporter, IFCN
[email protected] (mailto:[email protected])

ADVERTISE ([link removed]) // DONATE ([link removed]) // LEARN ([link removed]) // JOBS ([link removed])
Did someone forward you this email? Sign up here. ([link removed])
© All rights reserved Poynter Institute 2023
801 Third Street South, St. Petersburg, FL 33701
If you don't want to receive email updates from Poynter, we understand.
You can change your subscription preferences ([link removed]) or unsubscribe from all Poynter emails ([link removed]) .