From Counter Extremism Project <[email protected]>
Subject Tech & Terrorism: Tech’s Activist Role In Content Moderation Undermines Industry Arguments For Section 230 Protections
Date May 21, 2020 3:01 PM
  Links have been removed from this email. Learn more in the FAQ.


Tech & Terrorism: Tech’s Activist Role In Content Moderation Undermines
Industry Arguments For Section 230 Protections

(New York, N.Y.) – Technology giants like Facebook, Twitter, and
Google/YouTube have long maintained that they are merely neutral platforms and
cannot be held responsible for what users choose to post and share on their
sites. Section 230 of the Communications Decency Act (CDA) has provided blanket
liability protection to encourage tech firms to proactively remove hateful,
abusive, violent, and other unwanted content from their sites. Instead of
availing themselves of the freedom to properly enforce their Terms of Service
under the protection afforded by Section 230, tech companies have done the bare
minimum to rein in dangerous content, while using the liability shield to fend
off lawsuits <[link removed]> from victims of terrorism.

An analysis of tech companies’ behavior reveals that they are not simply
neutral platforms. In fact, these companies play a much more active role with
regard to content—collecting and selling it, promoting it, and even producing
content of their own.

Big tech likes to point out that their platforms are free of charge—but there
is still a cost to the public. In exchange for using these ostensibly free
platforms, users post and share content as well as provide personal data to
these firms. These companies then use this information for profit. For
instance, Twitter packages <[link removed]> streams of public posts and shares
them with business partners around the world. Facebook insists that it does
not sell user data, arguing instead that it simply sells access
<[link removed]> to its users, from which others can harvest information—a
distinction without a difference. The simple truth is that users’ content and
personal information are the drivers of Facebook’s profitability.

Dr. Hany Farid, professor of electrical engineering and computer science at UC
Berkeley and senior advisor to the Counter Extremism Project (CEP), explained
in a May 14 webinar <[link removed]> that tech companies algorithmically
amplify certain content over other content. This practice of using algorithms
to micro-target content at specific audiences casts doubt on the tech
industry’s claim that its platforms are neutral and therefore protected under
Section 230. Dr. Farid referenced a study he co-authored in March on YouTube’s
promotion of conspiracy videos <[link removed]>, which noted that
approximately 70 percent of watched content on YouTube is recommended by its
algorithm. He explains <[link removed]>, “So
[those algorithm-based videos are what’s in the] ‘Watch Next’ or ‘Recommended
For You’ down the right-hand column. So all of the action is in these
recommendation engines. And so when YouTube says—hey guys watch this
misinformation, watch this conspiracy, watch this hate video—they are the ones
who are promoting this material. They're not just hosting it.”

The real-world effects of this “activist role <[link removed]>” were most
vividly illustrated in May 2019, when a whistleblower complaint
<[link removed]> to the Securities and Exchange Commission alleged that
Facebook’s auto-generation algorithm had in fact created a “branded landing
space” for extremist groups such as al-Qaeda, ISIS, and al-Shabab. The CEP
report Spiders of the Caliphate <[link removed]> also found that ISIS
followers exploit Facebook’s algorithms, whether through the “suggested
friends” feature or the auto-generation of videos and pages.

Moreover, tech firms are actively pursuing content-producing ventures despite
protests to the contrary. In 2016, CEO Mark Zuckerberg insisted
<[link removed]> that Facebook is “a tech company, not a media company.” But
by the following year, Facebook was reported to be willing to spend up to $1
billion on original content <[link removed]> and was looking to revamp its
Facebook Watch tab. By 2019, the so-called tech company was meeting with
publishers and studio producers to develop new shows <[link removed]> for
Watch.

Tech’s actions make it clear that these companies have been, and will continue
to be, actively involved in the collection, manipulation, and creation of
content—behaving, in effect, as publishers.

As U.S. Attorney General William P. Barr pointed out <[link removed]>,
Section 230 was intended to shield tech companies from liability if they
opted to moderate content. The tech industry unfortunately wants to have it
both ways. Big tech wants to be able to monetize, control, manipulate, and
create new content. But somehow, these companies want to be treated under the
law not as hugely powerful and demonstrably engaged corporations, but as a
collection of blank slates. Indeed, tech has spent record amounts
<[link removed]> and lobbied extensively <[link removed]> to maintain the
coveted shield that is Section 230.

Yet despite the industry’s massive lobbying, there is growing bipartisan
support <[link removed]> in Congress to amend the law. Given its proven
unwillingness to act effectively in the interest of public safety, the tech
industry no longer deserves the ability to hide behind such a broad shield.

 ###

Unsubscribe <[link removed]>

Message Analysis

  • Sender: Counter Extremism Project
  • Political Party: n/a
  • Country: n/a
  • State/Locality: n/a
  • Office: n/a
  • Email Providers:
    • Iterable