From: Counter Extremism Project <[email protected]>
Subject: Tech & Terrorism: U.S. Senate Report Finds Big Tech Continues To Amplify Extremist Content
Date: December 8, 2022 5:35 PM
  Links have been removed from this email. Learn more in the FAQ.

Tech & Terrorism: U.S. Senate Report Finds Big Tech Continues To Amplify
Extremist Content



(New York, N.Y.) — After a three-year investigation into the rise of domestic
terrorism and the federal government's response, the U.S. Senate Committee on
Homeland Security and Governmental Affairs recently published its report
<[link removed]>, The Rising Threat of Domestic Extremism: A Review of the
Federal Response to Domestic Terrorism and the Spread of Extremist Content on
Social Media. The Committee concluded that tech companies’ business models are
“designed to increase user engagement and that…more extreme content tends to
increase user engagement, thus leading such content to be amplified.” The
finding echoes Counter Extremism Project (CEP) Senior Advisor Dr. Hany Farid’s
expert testimony <[link removed]> to Congress that “outrageous, divisive, and
conspiratorial content increases engagement” and that “the vast majority of
delivered content is actively promoted by content providers based on their
algorithms that are designed in large part to maximize engagement and revenue.”



The Committee reports that while tech companies have professed their interest
in “prioritiz[ing] trust and safety” and “noted that they have invested heavily
in content moderation,” their efforts have fallen short in “mitigat[ing] the
proliferation of extremist content that their own recommendation algorithms,
products, and features are spreading.”



In March 2020, Dr. Farid and other UC Berkeley researchers authored a study, A
Longitudinal Analysis Of YouTube’s Promotion Of Conspiracy Videos
<[link removed]>, that analyzed YouTube’s policies and efforts to curb its
recommendation algorithm’s tendency to spread divisive conspiracy theories.
After reviewing eight million recommendations over 15 months, the researchers
determined that the progress YouTube claimed <[link removed]> in June 2019,
when it said it had reduced the amount of time users spent watching recommended
conspiratorial videos by 50 percent, and in December 2019 <[link removed]>,
when it claimed 70 percent, did not make the “problem of radicalization on
YouTube obsolete nor fictional.” The study ultimately found that a more
complete analysis of YouTube’s algorithmic recommendations showed that
conspiratorial recommendations are “now only 40 percent less common than when
YouTube’s measures were first announced.”



In its next session, Congress should respond to these findings by lifting the
liability immunity enshrined in Section 230 of the Communications Decency Act
for platforms that knowingly or recklessly use recommendation algorithms to
promote terrorist content. Congress should also lift blanket immunity for
terrorist content posted by third parties. While content moderation remains at
the forefront of many policy conversations, material produced by or in support
of designated terrorist groups and individuals must be removed without
exception because it continues to inspire further violence.



To watch a recording of the CEP web event, Algorithmic Amplification of
Divisive Content on Tech Platforms, please click here <[link removed]>.


###






Message Analysis

  • Sender: Counter Extremism Project
  • Email Providers:
    • Iterable