The newest iteration of OpenAI’s generative chatbot, GPT-4, is heightening existential fears over misinformation, disinformation and job security. Its reasoning abilities are purportedly better than ChatGPT’s, with OpenAI pointing to the newest version’s superior scores on standardized tests like the LSAT, bar exam and GRE as evidence. GPT-4 can understand the semantic content of images and can accurately discern why a meme is funny. It is capable of instantly translating hand-drawn pictures of web pages into fully functional ones.

Given only the image, GPT-4 explains why a global map made of chicken nuggets is funny. (Screenshot/GPT-4/@inthesun_x)
Prominent figures in Silicon Valley — including Elon Musk, Apple co-founder Steve Wozniak and former presidential candidate Andrew Yang — recently signed a petition calling for a pause on the development of artificial intelligence systems more powerful than GPT-4. The petition has over 30,000 signatories and cites large-scale societal risks as its motivation.
“Systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” the letter reads.
In a March interview with ABC News’ Rebecca Jarvis, OpenAI CEO Sam Altman expressed his fears over AI’s potential for spreading disinformation.

“I’m particularly worried that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, (they) could be used for offensive cyberattacks.”
But Altman dispelled fears of AI world domination, saying “this is a tool that is very much in human control.”
Researchers and fact-checkers working in artificial intelligence have warned of spurious statements surrounding the capabilities of chatbots.
“Claims that generative AI tools will produce accurate content should be treated with great caution. There is nothing inherent in the technology that provides any assurance of accuracy,” said Kate Wilkinson, senior product manager at Full Fact, a London-based fact-checking organization that implements AI. “At this stage it is too early to be able to say exactly how this will evolve, but it is clearly a significant change in the general accessibility of such a powerful set of tools to such a wide group of users.”
Others echoed Altman’s worries over mass dissemination of disinformation.
“A cunning manipulator could use it to quickly generate a lot of information that sounds trustworthy, but is, in fact, misleading or utterly false,” said Marcel Kiełtyka, a spokesperson for Demagog, a nongovernmental and fact-checking organization based in Poland. “Fact-checkers should keep an eye on developments of such technologies because the automation of content-creation poses a risk of flooding social media with misinformation.”