From: Christina Swarns <[email protected]>
Subject: What happens when AI gets it wrong?
Date: September 19, 2023 9:04 PM
  Links have been removed from this email. Learn more in the FAQ.
John,

Artificial intelligence (AI) technology has rapidly become a part of our everyday lives. Whether it is the filters on social media, the autocorrect feature on our smartphones, or customer service chatbots, AI is all around us — even within our criminal legal system.

Police now use investigative and surveillance technologies, such as facial recognition systems, to try to identify people who committed crimes. But time and again, these technologies get it wrong, as seen in Porcha Woodruff’s case. Porcha, a 32-year-old aesthetician and nursing student, was eight months pregnant and getting her two daughters ready for school when police showed up at her house and arrested her for carjacking — a crime she didn’t commit. The Detroit police used facial recognition technology to run an image of the suspect through a mugshot database, and Porcha’s photo was among those returned. She was questioned over the course of 11 hours at the Detroit Detention Center.

A month later, the prosecutor dismissed the case against her for insufficient evidence. But Porcha’s ordeal demonstrates the very real risk that cutting-edge AI-based technology poses to innocent people, especially when that technology is neither rigorously tested nor regulated before it is put into use.

Please, take a moment right now to read my thoughts on the dangers that unregulated and untested AI poses to innocent people. [[link removed]]

Although its accuracy has improved in recent years, AI technology still relies heavily on vast quantities of information that it cannot assess for reliability. In many cases, that information is biased.

Facial recognition software, for instance, is significantly less reliable for Black and Asian people, who, according to a study by the National Institute of Standards and Technology, were 10 to 100 times more likely to be misidentified than white people. That study, along with other independent research, found that the systems’ algorithms struggled to accurately analyze the facial features of people with darker skin tones. To date, six people that we know of have reported being falsely accused of a crime based on a facial recognition match, and all six were Black.

The use of such biased technology has real-world consequences for innocent people throughout the country.

The truth is, absent comprehensive testing or oversight, the introduction of additional AI-driven technology will only increase the risk of wrongful conviction and may displace effective policing strategies, such as community engagement and relationship-building, that we know can reduce wrongful arrests.

Here at the Innocence Project, we are committed to countering the harmful effects of emerging technologies, advocating for research on AI’s reliability and validity, and urging consideration of the ethical, legal, social, and racial justice implications of its use.

If you want to learn more about the harmful impact AI technology can have on our criminal legal system and the steps we’re taking to combat it, read my latest piece on the Innocence Project website right now. [[link removed]]

With gratitude,

Christina Swarns
Executive Director
Innocence Project



SHOP: [[link removed]]
DONATE: [[link removed]]


The Innocence Project works to free the innocent, prevent wrongful convictions, and create fair, compassionate, and equitable systems of justice for everyone. Founded in 1992 by Barry C. Scheck and Peter J. Neufeld at the Benjamin N. Cardozo School of Law at Yeshiva University, the organization is now an independent nonprofit. Our work is guided by science and grounded in anti-racism.


Copyright © 2023 Innocence Project, All rights reserved.
212.364.5340
[email protected]

unsubscribe from all emails [link removed]
update subscription preferences [link removed]
privacy policy [[link removed]]
disclosures [[link removed]]