CDT is shaping the future of responsible AI.
Artificial Intelligence (AI) is rapidly becoming inseparable from our day-to-day lives, used across sectors by governments and companies alike. As AI evolves, and especially as generative AI models and products become more prominent, questions surrounding its regulation, implementation, and development have become increasingly urgent.
In response, the Center for Democracy & Technology (CDT) has doubled down on our work to ensure AI is developed and deployed in ways that respect people’s rights and safety. CDT established our AI Governance Lab in 2023 to advance best practices for companies and governments implementing responsible AI. Our policy teams are engaging with policymakers in the U.S., EU, and other regions on effective guardrails and protections. CDT is also focused on combating misuses of AI through initiatives like our Non-Consensual Intimate Images (NCII) Working Group and our work to address AI-driven fraud and AI’s impact on elections. At a crucial time for this technology, CDT is leading conversations about how AI is tested, used, and governed, and helping developers, deployers, and the people affected by AI systems engage with it responsibly.
In October 2023, CDT launched our AI Governance Lab to pioneer responsible practices in the ever-changing field of advanced AI, including generative AI systems. The Lab, along with its advisory committee of experts, is a leading voice on policy and practices for the responsible use of AI, with a particular focus on how AI affects people’s rights and daily lives. To combat potential prejudice, the Lab published an in-depth report on measuring bias and discrimination in AI systems. The Lab has also proposed ways to strengthen empirical research on generative AI, and has worked to improve governance outcomes through AI documentation, synthesizing proposed documentation methods to distill key lessons and best practices. The team is engaging on the EU Codes of Practice for General Purpose AI Systems, is part of the NIST AI Safety Institute Consortium, and is active in multistakeholder groups such as MLCommons and the Partnership on AI.
In addition, CDT’s policy teams are engaging on a range of AI issues. CDT’s Equity in Civic Technology team advocated for measures that would strengthen the U.S. Office of Management & Budget’s guidance on government use of AI systems and responsible AI procurement, and is supporting federal and state agencies’ ongoing work on responsible uses of AI. CDT has published original work and policy recommendations about uses of AI in the workplace, in public benefits programs, in housing, and in schools. CDT has also emerged as a leading resource for state lawmakers considering AI legislation, and was recently named to the Commission created by Colorado Senate Bill 205, the first law in the country that requires companies to assess high-risk AI tools for their potential to discriminate.
We’re also working to combat AI-driven threats and abuse. In collaboration with the Cyber Civil Rights Initiative (CCRI) and National Network to End Domestic Violence (NNEDV), we launched a multistakeholder NCII Working Group to combat the creation, distribution, and resulting harms of non-consensual intimate images (NCII), including images generated by AI. We participate in multistakeholder efforts on watermarking, content provenance, and other tools to address the ways synthetic content may exacerbate fraud and deception. Our CEO also serves on the Department of Homeland Security’s AI Safety & Security Oversight Board alongside leading government officials and CEOs, informing guidance on the risks AI may create for critical infrastructure.
In the 2024 election cycle, CDT’s Elections & Democracy team was also hard at work. CDT developed a comprehensive set of election integrity recommendations for AI developers, researched the risk of voting misinformation shared by chatbots, and detailed how social media companies’ rules for political advertising have changed. We worked directly with AI companies, social media platforms, election officials, and community groups to assess the risks and help people access reliable election information they can trust.
Generative AI is a powerful technology that has taken the world by storm. In its short lifespan, it has already touched nearly every aspect of our lives, including how we learn, work, and vote. As the positive and negative implications of generative AI become apparent, so does the need for research-oriented solutions promoting the responsible use of these tools.
Since the earliest days of generative AI, CDT has been a thought leader in this space, working consistently to ensure that responses to generative AI prioritize individual rights in decision-making. If you are not yet engaged and want to learn more, please reply to this email to join the conversation. You can help keep civil rights and civil liberties at the center of the digital age.