From Center for Democracy & Technology <[email protected]>
Subject New Survey Research from CDT Reveals Educators’ Struggle with Generative AI Detection, Discipline, and Distrust
Date March 27, 2024 2:00 PM
  Links have been removed from this email. Learn more in the FAQ.
To view this email online, paste this link into your browser:
[link removed]





NEW FROM CDT  


Today, the Center for Democracy & Technology (CDT) released a new survey research report ([link removed]) about teachers’ experiences during the 2023-24 school year with generative AI policies, training on the technology, and management of student use.


CDT report illustration of an “AI-generated apple” with a parachute flying through an open sky, and “AI-generated” schoolwork, book, pencil & eraser falling behind. Note: this illustration was created solely by a human.

Now that K-12 schools have had a summer to craft and implement policies, teachers are having a very different experience with generative AI than they did in the 2022-23 school year. CDT’s research reveals major changes in teacher and student usage, policy-setting, training, and teacher engagement on generative AI. For example, 80% of teachers report receiving formal training about generative AI use policies and procedures, and 72% say their school has asked them for input on policies and procedures regarding student use of generative AI.

But teachers report that generative AI’s biggest risks remain largely unaddressed – particularly risks that stem from:

Managing responsible student use — only 28% of teachers say that they have received guidance about how to respond if they suspect a student has used generative AI in ways that are not allowed, such as plagiarism;

Teachers becoming heavily reliant on ineffective AI content detection tools — 68% of teachers report using an AI content detection tool regularly, despite known efficacy issues that disproportionately affect students who are protected by civil rights laws;

Increased student disciplinary action — 64% of teachers say that students at their school have gotten in trouble or experienced negative consequences for using, or being accused of using, generative AI on a school assignment, a 16 percentage-point increase from last school year; and

Persistent distrust in students’ academic integrity — 52% of teachers agree that generative AI has made them more distrustful of whether their students' work is actually theirs.

This round of survey research builds on CDT’s nationally representative survey of teachers, parents, and students ([link removed]) conducted during the 2022-23 school year, which revealed that schools were struggling to provide policies and procedures around the responsible use of generative AI, leaving teachers, parents, and students without clear guidance. Even in the absence of official policies, students were already experiencing disciplinary action for generative AI use.

By continuing not to address these key concerns about a technology that is not going away, schools risk negatively affecting students’ educational experiences, privacy, and civil rights.

The full research report, a comprehensive slide deck of the research findings, and the press release ([link removed]) are available on CDT’s website.

CONNECT WITH CDT

DONATE ([link removed])






1401 K St NW Suite 200 | Washington, DC xxxxxx United States

This email was sent to [email protected].
To ensure that you continue receiving our emails,
please add us to your address book or safe list.

manage your preferences ([link removed])
opt out ([link removed]) using TrueRemove(r).

Got this as a forward? Sign up ([link removed]) to receive our future emails.
email powered by Emma(R)