Last month, OpenAI announced that it had disrupted a covert Iranian influence campaign that was using its chatbot, ChatGPT, to spread disinformation online. This came less than two months after the U.S. Justice Department took down a large network of AI-created bots that were spreading Russian propaganda.
Iran and Russia are not alone. China, too, is exploring AI-enhanced social media manipulation.
A recent RAND paper examines how Beijing's use of powerful generative AI could pose a national security threat. China is a particularly useful case study, the authors say, because its disinformation efforts seem to be growing bolder and more sophisticated.
Mitigating this threat won’t be easy, but there are some actions that could help. For example, social media platforms could redouble their efforts to identify, attribute, and remove fake accounts. Media companies and other legitimate content creators could develop digital watermarks or other ways to show that their pictures and videos are authentic. And federal regulators could consider requiring social media companies to verify users’ identities, much like banks do.
But these steps will require time and trade-offs. That’s why skeptical social media users may still be the best defense against AI-generated disinformation. “We have to assume that AI manipulation is ubiquitous,” said RAND’s Nathan Beauchamp-Mustafaga. “It’s proliferating, and we’re going to have to learn to live with it.”