There’s an old saying that “seeing is believing,” but with the rise of artificial intelligence, that’s not really true anymore.
We’re seeing fake images, doctored videos, and manipulated audio. You might have seen some of the most viral examples, like the AI-generated image of an explosion near the Pentagon or the deepfake images of Donald Trump being arrested.
Deception. Fraud. Misinformation. These aren’t new problems. But AI brings a whole new scale to them.
We can’t just wait and see what happens next with AI. We need to proactively address the dangers it poses. That’s why I introduced a bipartisan bill to promote transparency in AI and prevent fraud and the spread of misinformation.
My bill would require developers to clearly label AI-generated content and chatbots, prohibit the publication of unlabeled AI-generated content, and establish technical standards for social media platforms to identify such content. It puts the onus on companies, not people, because no one should have to do a deep dive to figure out whether what they’re looking at is real.
There’s great potential for AI to improve lives -- it may help diagnose diseases early and make society more productive -- but there’s still so much we don’t know about AI, and labeling is a commonsense place to start. It’s a straightforward solution to a complex problem.
I want to build support for my bill, and I’m hoping I can count on yours. Add your name if you agree we need to start taking steps now to address AI-generated misinformation and fraud.
Mahalo,
Brian