From Counter Extremism Project <[email protected]>
Subject Tech & Terrorism: U.S. Senate Subcommittee Examines Role Algorithmic Amplification Plays In Driving Extremist Content
Date April 27, 2021 8:20 PM
  Links have been removed from this email. Learn more in the FAQ.


Tech & Terrorism: U.S. Senate Subcommittee Examines Role Algorithmic
Amplification Plays In Driving Extremist Content

 

(New York, N.Y.) – The U.S. Senate Judiciary Subcommittee on Privacy,
Technology, and the Law held a hearing
<[link removed]>
today on algorithmic amplification and its role in spreading extremist content
on social media platforms. Among those testifying were Facebook’s Vice
President for Content Policy Monika Bickert, Twitter’s Head of U.S. Public
Policy Lauren Culbertson, and YouTube’s Director of Government Affairs and
Public Policy for the Americas and Emerging Markets Alexandra Veitch. During
the hearing, company officials collectively minimized the impact their
algorithms have on spreading extremist content and fell back on a familiar
refrain of protecting free speech, even going so far as to claim that it is not
in their business interest to spread such content.

 

“Tech companies have a very simple business model: the more engagement their
algorithms drive, the more money they make. Extremist content, terrorist
content, or any other type of divisive content are the best generators for
this,” said Counter Extremism Project (CEP) Senior Advisor Dr. Hany Farid, a
professor at the University of California, Berkeley. “At the end of the day,
it’s not a question of how much or how little a company is doing to stop their
algorithms from spreading bad content. It’s a question of whether they are
willing to. What we have seen up until now, and will continue to see, suggests
that so long as their business model directly contradicts this, we will not see
any substantive changes.”

 

In one example from today’s hearing, YouTube touted changes it made in January
2019 that it claimed limited the spread of extremist content. However, in March
2020, Dr. Farid and other researchers at the University of California, Berkeley
released a report titled A Longitudinal Analysis Of YouTube’s Promotion Of
Conspiracy Videos
<[link removed]>, which showed
that the proportion of conspiratorial recommendations is “now only 40 percent
less common than when YouTube’s measures were first announced.” After reviewing
eight million recommendations over 15 months, the researchers determined that
the progress YouTube claimed, a 50 percent reduction in the amount of time its
users watched recommended videos as of June 2019 and a 70 percent reduction as
of December 2019, did not make the “problem of radicalization on YouTube
obsolete nor fictional.”

 

Relevant excerpts from the hearing may be found below:

 

Monika Bickert, Facebook
<[link removed]>: “The
reality is that it’s not in Facebook’s interest—financially or
reputationally—to push users towards increasingly extreme content. The
company’s long-term growth will be best served if people continue to use and
value its products for years to come. If we prioritized trying to keep a person
online for a few extra minutes, but in doing so made that person unhappy or
angry and less likely to return in the future, it would be self-defeating.
Furthermore, the vast majority of Facebook’s revenue comes from advertising.
Advertisers don’t want their brands and products displayed next to extreme or
hateful content—they’ve always been very clear about that. Even though
troubling content is a very small proportion of the total content people see on
our services (hate speech is viewed 7 or 8 times for every 10,000 views of
content on Facebook), Facebook’s long-term financial self-interest is to
continue to reduce it so that advertisers and users have a good experience and
continue to use our services.”

 

Lauren Culbertson, Twitter
<[link removed]>: “Many
of the questions we grapple with today are not new, but the rise and evolution
of the online world have magnified the scale and scope of these challenges. As
a global company that values free expression, we find ourselves navigating
these issues amidst increasing threats to free speech from governments around
the world. We strive to give people a voice while respecting applicable law and
staying true to our core principles.”

 

Alexandra Veitch, YouTube
<[link removed]>: “In
January 2019, we announced changes to our recommendations systems to limit the
spread of this type of content. These changes resulted in a 70 percent drop in
watch time on non-subscribed recommended content in the U.S. that year. We saw
a drop in watch time of borderline content coming from recommendations in other
markets as well. While algorithmic changes take time to ramp up and consumption
of borderline content can go up and down, our goal is to have views of
non-subscribed, recommended borderline content below 0.5 percent. We seek to
drive this number to zero, but no system is perfect; in fact, measures intended
to take this number lower can have unintended, negative consequences, leading
legitimate speech to not be recommended. As such, our goal is to stay below the
0.5 percent threshold, and we strive to continually improve over time.”

 

###

Unsubscribe
<[link removed]>

Message Analysis

  • Sender: Counter Extremism Project
  • Political Party: n/a
  • Country: n/a
  • State/Locality: n/a
  • Office: n/a
  • Email Providers:
    • Iterable