CDT is shaping the future of responsible AI.
Artificial Intelligence (AI) is rapidly becoming inseparable from our day-to-day lives, used across sectors by governments and companies alike. As AI evolves -- particularly as generative AI models and products become more prominent -- questions about its regulation, implementation, and development have become increasingly urgent.
In response, the Center for Democracy & Technology (CDT) has doubled down on our work to ensure AI is developed and deployed in ways that respect people's rights and safety. CDT established our AI Governance Lab in 2023 to advance best practices for companies and governments implementing responsible AI. Our policy teams are engaging with policymakers in the U.S., EU, and other regions on effective guardrails and protections. CDT is also focused on combating misuses of AI through initiatives like our Non-Consensual Intimate Images (NCII) Working Group and our work to address AI-driven fraud and AI's impact on elections. At a crucial time for this technology, CDT is leading conversations about how AI is tested, used, and governed, and helping developers, deployers, and people impacted by AI systems engage responsibly with the technology.
In October 2023, CDT launched our AI Governance Lab ([link removed]) to serve as a pioneer in the ever-changing field of advanced AI, including generative AI systems. The Lab, along with its advisory committee of experts ([link removed]), is a leading voice on policy and practices for the responsible use of AI, with a particular focus on how AI affects people's rights and daily lives. To combat potential prejudice in AI systems, the AI Governance Lab published an in-depth report on measuring bias and discrimination in AI systems ([link removed]). The Lab has proposed solutions that would improve empirical research conducted on generative AI ([link removed]); it has also worked towards improving governance outcomes through AI documentation ([link removed]), synthesizing proposed AI documentation methods to distill key lessons and best practices. The team is engaging on the EU Code of Practice for General Purpose AI, is part of the NIST AI Safety Institute Consortium, and is active in multistakeholder groups such as MLCommons and the Partnership on AI.
In addition, CDT's policy teams are engaging on a range of AI issues. CDT's Equity in Civic Technology team advocated for measures that would strengthen the U.S. Office of Management & Budget's guidance on government use of AI systems ([link removed]) and responsible AI procurement ([link removed]), and is supporting federal and state agencies' ongoing work on responsible uses of AI. CDT has published original work and policy recommendations about uses of AI in the workplace ([link removed]), in public benefits ([link removed]) programs ([link removed]), in housing ([link removed]), and in schools ([link removed]). CDT has also emerged as a leading resource for state lawmakers considering AI legislation, and was recently named to the Commission created by Colorado Senate Bill 205 ([link removed]), the first law in the country that requires companies to assess high-risk AI tools for their potential to discriminate.
We're also working to combat AI-driven threats and abuse. In collaboration with the Cyber Civil Rights Initiative (CCRI) and the National Network to End Domestic Violence (NNEDV), we launched a multistakeholder NCII Working Group ([link removed]) to combat the creation, distribution, and resulting harms of non-consensual intimate images (NCII), including images generated by AI. We participate in multistakeholder efforts on watermarking, content provenance, and other tools to address the ways synthetic content may exacerbate fraud and deception. Our CEO also serves on the Department of Homeland Security's AI Safety & Security Oversight Board ([link removed]) alongside leading government officials and CEOs, informing guidance on the risks AI may create for critical infrastructure.
In the 2024 election cycle, CDT's Elections & Democracy team was also hard at work. CDT developed a comprehensive set of election integrity recommendations for AI developers ([link removed]), researched the risk of voting misinformation ([link removed]) shared by chatbots, and detailed how social media companies' rules for political advertising ([link removed]) have changed. We worked directly with AI companies, social media platforms, election officials, and community groups to assess the risks and help people access reliable election information.
Evaluating Generative AI Systems: the Good, the Bad, and the Hype ([link removed])
Generating Confusion: Stress-testing AI Chatbot Responses on Voting with a Disability ([link removed])
In Deep Trouble - Surfacing Tech-Powered Sexual Harassment in K-12 Schools ([link removed])
General Purpose AI Models and the EU Code of Practice: a Process for Civil Society to Watch ([link removed])
Generative AI is a powerful tool that has taken the world by storm. In its short lifespan, it has already touched nearly every aspect of our lives, including how we learn, work, and vote. As the positive and negative implications of generative AI become apparent, so does the need for research-driven solutions that promote the responsible use of these powerful tools.
Since the earliest days of generative AI, CDT has been a thought leader in this space, working consistently to ensure that responses to generative AI prioritize individual rights in decision-making. If you are not yet engaged and want to learn more, please reply to this email to join the conversation. You can help keep civil rights and civil liberties at the center of the digital age.
LEARN MORE ([link removed])
SUPPORT OUR WORK ([link removed])