ChatGPT, a new artificial intelligence application by OpenAI, has captured the imagination of the internet. Some have suggested it is the most significant technological advance in modern history. In a recent interview, Noam Chomsky called it “basically high tech plagiarism.” Others have suggested large language models like ChatGPT spell the end for Google search, because they spare users the work of sifting through multiple websites to find digestible information.
The technology works by training on vast quantities of text drawn from the internet, then using that statistical model to generate new content in response to user prompts. Users can ask it to produce almost any kind of text-based content.
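For readers curious what that prompt-and-response loop looks like in practice, here is a minimal sketch using OpenAI’s public Python client. The model name and prompt are purely illustrative, and an API key is assumed to be configured; this is not drawn from any of the tests described in this article.

```python
# A minimal sketch of the prompt -> generated-text loop described above,
# using OpenAI's public Python client (pip install openai). The model name
# and prompt are illustrative; an API key must be set in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model family behind ChatGPT
    messages=[
        {"role": "user", "content": "Write a short blog post about winter travel."}
    ],
)

# The model returns newly generated text conditioned on the prompt.
print(response.choices[0].message.content)
```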
Given its clear creative power, many are warning of ChatGPT’s potential to be a misinformation superspreader, capable of instantly producing news articles, blogs, eulogies and political speeches in the style of particular politicians, writing whatever the user desires. It’s not hard to see how, with only slight advances, AI-powered bot accounts on social media could become virtually indistinguishable from humans.
Analysts at NewsGuard, an online trust-rating platform for news, recently tested out the tool and found it produced false information on command when asked about sensitive political topics.
“In most cases, when we asked ChatGPT to create disinformation, it did so, on topics including the January 6, 2021, insurrection at the US Capitol, immigration and China’s mistreatment of its Uyghur minority,” wrote Jim Warren for the Chicago Tribune, adding that it took up to five tries in certain cases to get past OpenAI’s security buffer.
“Indeed, for some myths, it took NewsGuard as many as five tries to get the chatbot to relay misinformation, and its parent company has said that upcoming versions of the software will be more knowledgeable,” wrote Jack Brewster, Lorenzo Arvanitis and McKenzie Sadeghi for NewsGuard.
A transcription of text entered into the ChatGPT application by NewsGuard analysts, and the tool’s response. (NewsGuard)
While ChatGPT has a basic moral framework to prevent unethical usage — if you ask it to enumerate positive attributes of Adolf Hitler, for example, it will refuse the first time — that framework can easily be bypassed by offering up weak justifications for the instruction.
Users can access forbidden information by tricking ChatGPT with a hypothetical. (@wlyzach/Twitter)
“It can serve up information in clear, simple sentences, rather than just a list of internet links. It can explain concepts in ways people can easily understand,” wrote New York Times technology reporters Nico Grant and Cade Metz. “It can even generate ideas from scratch, including business strategies, Christmas gift suggestions, blog topics and vacation plans.”
“ChatGPT is freakishly good at spitting out misinformation on purpose,” reads a headline from Futurism.
Sam Altman, the CEO of OpenAI, said on Twitter last year, “We should be much more nervous about the growing calls to censor ‘misinformation’ than the misinformation itself,” adding that experts have in the past been wrong about misinformation labels.
Software has already been developed to instantly detect whether ChatGPT has been used to generate text.
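Detectors of this kind often rely in part on how statistically predictable a passage is to a language model: machine-generated text tends to score lower perplexity than human writing. The sketch below illustrates that heuristic using the open-source GPT-2 model from Hugging Face; the threshold is an arbitrary assumption for illustration, not a value used by any real detection product.

```python
# Illustrative perplexity heuristic for AI-text detection, using the
# open-source GPT-2 model (pip install torch transformers). Real detectors
# combine many signals; the threshold below is an arbitrary assumption.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average how 'surprised' GPT-2 is by each token in the text."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return
        # the mean cross-entropy loss over the sequence.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

sample = "The quick brown fox jumps over the lazy dog."
score = perplexity(sample)
# Lower perplexity means the text is more predictable -- a weak signal
# (not proof) that a language model may have written it.
verdict = "possibly machine-generated" if score < 50 else "likely human"
print(f"perplexity = {score:.1f} -> {verdict}")
```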