From Counter Extremism Project <[email protected]>
Subject Tech & Terrorism: Big Tech Companies Cut Ethics And Safety Staff Following Section 230 Ruling
Date June 6, 2023 6:00 PM

Tech & Terrorism: Big Tech Companies Cut Ethics And Safety Staff Following
Section 230 Ruling



(New York, N.Y.) — In the wake of twin U.S. Supreme Court rulings that largely maintained the liability shield in Section 230 of the Communications Decency Act, major tech companies are sharply reducing their workforce responsible for content moderation. Citing their commitment to “do more with less,” Twitter executives announced that they have laid off 15 percent of the company’s trust and safety staff, while Meta eliminated 200 content moderation positions and cut at least 100 similar posts on Instagram’s integrity and responsibility staff. Meanwhile, Google parent company Alphabet cut one-third of its staff dedicated to identifying misinformation and radicalization across its platforms.



The notable downsizing of trust and safety personnel at leading tech companies
demonstrates the industry’s unwillingness to effectively curb extremist and
terrorist content, which continues to proliferate.



Recently, the Counter Extremism Project (CEP) identified numerous accounts propagating extremist content across these platforms. In May, researchers located three Twitter accounts disseminating pro-ISIS propaganda. On Meta-owned Instagram, two pages, collectively reaching thousands of people, promoted a “European Fight Night” for an extreme-right German group. YouTube hosted a video interview of Australian neo-Nazi Thomas Sewell making antisemitic statements and advocating for white supremacy.



Unfortunately, the tech industry is actively deprioritizing online safety following the Supreme Court’s latest rulings. These companies have a business incentive to increase engagement on their platforms—including by pushing terrorist content—and U.S. lawmakers must act to encourage better behavior from the industry.



In 2021, CEP and its senior advisor Dr. Hany Farid supported the introduction of the Protecting Americans from Dangerous Algorithms Act (PADAA), which would narrowly amend Section 230, lifting the liability shield when an online platform knowingly or recklessly deploys recommendation algorithms to promote content that is relevant to, among other things, cases involving international terrorism.



Last month, Dr. Farid observed that “momentum is building for legislative reform,” as evidenced by the bipartisan support for legislation addressing children’s safety online, including one bill that would specifically remove “blanket immunity for violations of laws related to online child sexual abuse material.”



Reform, however, is far from a certainty. Companies such as Alphabet (the parent company of Google and YouTube), Meta, and Twitter have collectively spent nearly $100 million lobbying Congress since 2020, including on efforts to defeat legislation that would reduce Section 230 immunities for online platforms. Clearly, there is an aggressive and concerted effort to maintain the status quo. Legislators should resist tech’s efforts to obscure the ongoing spread of extremist and terrorist content to the detriment of public safety and security.



###







Unsubscribe | Donate | Contact Us

Were you forwarded this email? Subscribe for yourself here.
