From Counter Extremism Project <[email protected]>
Subject Tech & Terrorism: U.S. Supreme Court Will Hear Section 230 Liability Case
Date October 13, 2022 3:30 PM

Tech & Terrorism: U.S. Supreme Court Will Hear Section 230 Liability Case



(New York, N.Y.) — Earlier this month, the U.S. Supreme Court agreed
to hear the case Gonzalez v. Google, which challenges tech companies’ broad
immunity to lawsuits over content posted on their sites and platforms. The
family of Nohemi Gonzalez, an American college student killed in a restaurant
in Paris during the November 2015 ISIS attacks, argues that the liability
shield provided by Section 230 of the Communications Decency Act does not apply
when algorithms actively promote terrorist content. Specifically, the Gonzalez
family contends that Google-owned YouTube aided in ISIS’s recruitment by
recommending ISIS videos to users—including potential supporters of the terror
group—through its algorithms.



A ruling against Google from the highest U.S. court would have wide-ranging
implications for the sweeping protection provided to tech firms under Section
230. The Counter Extremism Project (CEP) has long advocated for reform, calling
for changes to Section 230 to remove blanket immunity for harmful material,
such as terrorist content, posted by third parties. CEP further argues that the
liability shield should also be lifted for platforms that knowingly or
recklessly deploy recommendation algorithms to promote terrorist content.



CEP Senior Advisor Dr. Hany Farid, a professor at the University of California,
Berkeley, stated, “Tech companies such as YouTube are for-profit
businesses and therefore have a strong business incentive to increase
engagement across their platform. One way they have been able to achieve
increased engagement as well as drive up revenues is through algorithmic
amplification. However, it is also this algorithmic amplification that is a
driving force for spreading extremist content online. Although it remains to be
seen how the court will rule, tech companies still need to be held liable for
increasing engagement on their platforms through promoting terrorist content,
and this case is an important milestone in that regard.”



In March 2020, Dr. Farid and other UC Berkeley researchers authored a study, A
Longitudinal Analysis Of YouTube’s Promotion Of Conspiracy Videos, that
analyzed YouTube’s policies and efforts to curb its recommendation algorithm’s
tendency to spread divisive conspiracy theories. After reviewing eight million
recommendations over 15 months, the researchers determined that the progress
YouTube claimed, a 50 percent reduction in the amount of time users spent
watching recommended conspiracy videos as of June 2019 and a 70 percent
reduction as of December 2019, did not make the “problem of radicalization on
YouTube obsolete nor fictional.” The study ultimately found that a fuller
analysis of YouTube’s algorithmic recommendations showed conspiratorial
recommendations are “now only 40 percent less common than when YouTube’s
measures were first announced.”



###





Unsubscribe