This bipartisan framework centers on the following principles:
- License requirements: Companies should be required to register with an independent oversight board in order to use these powerful technologies.
- Clear AI identification: AI-generated content should be clearly labeled as such (with a digital watermark, for example).
- Accountability: Existing laws should be enforced to prevent AI technologies from violating civil or privacy rights or causing other harms, and new laws should be written where existing law provides insufficient safeguards.
- Transparency: A public database should be established to give consumers and researchers access to details about the AI models in use.
- Protect kids and consumers: Consumers should have control over how their data is used in AI systems, and strict limits should be imposed on generative AI systems that may be used by kids.
- Section 230: The legal shield social media platforms have long enjoyed under Section 230 will not apply to AI, ensuring companies can be held legally responsible for real-world harms like election interference and racial or gender discrimination.
These principles represent a solid starting point. In the coming weeks, we will continue to engage experts, industry leaders, and other stakeholders to ensure new legislation properly accounts for both the promise and the perils of AI technology.
This work is urgent. Hundreds of AI experts have issued a forceful warning that unchecked AI technologies could pose existential risks to humanity. Our elections are at risk. Millions of jobs are at risk. The very notion of what’s true is at risk. Humanity’s safety is at risk.
But I need your help to demand transparency into how these AI black boxes work. With so much at stake, we must discuss and define limits on their use. Please take action by submitting your response now:
Should we take action to protect against potential harm from AI?