Friend,
I just held another hearing in the Senate on Artificial Intelligence (AI), and I wanted to give you an update on this important issue.
AI’s own creators are stating, publicly and with real concern, that if AI is not properly regulated, it poses great risks to our society. That’s why we called on top industry executives and leading experts to help us shape legislation to put safeguards into effect.
In consultation with those leaders, we’ve now taken a first step toward commonsense AI regulations by creating a new framework for legislation.
This bipartisan framework centers on the following principles:
- License requirements: Companies should register with an independent oversight board to use these powerful technologies.
- Clear AI identification: AI-generated content should be clearly labeled as such (with a digital watermark, for example).
- Accountability: Existing laws should be enforced to prevent AI technologies from violating civil or privacy rights or causing other harms, and new laws should be written where existing law provides insufficient safeguards.
- Transparency: A public database should be established to give consumers and researchers access to details about the AI models being deployed.
- Protect kids and consumers: Consumers should have control over how their data is used in AI systems, and strict limits should be imposed on generative AI systems that may involve kids.
- Section 230: The legal shield that social media platforms have long enjoyed (known as Section 230) should not apply to AI, ensuring companies can be held legally responsible for real-world harms like election interference and race or gender discrimination.
These principles represent a solid starting point. In the coming weeks, we will continue to engage experts, industry leaders, and other stakeholders to ensure new legislation properly accounts for the potential promise and perils of AI technology.
This work is urgent, friend. Hundreds of experts in the field of AI have issued a forceful warning that unchecked AI technologies could pose existential risks to humanity. Our elections are at risk. Millions of jobs are at risk. The very notion of what’s true is at risk. Humanity’s safety is at risk.
But I need your help to demand transparency into how these AI black boxes work. With so much at stake, we must discuss and define limits on their use. Please take action by recording your response now:
Should we take action to protect against potential harm from AI?
[YES] [NO]
[link removed]
Thank you,
Dick
--------
This email was sent to [email protected].
To unsubscribe from this email list, please click here: [link removed]
Paid for by Blumenthal for Connecticut
Blumenthal for Connecticut
1111 Summer Street
Suite 301
Stamford, CT 06905
United States