From Counter Extremism Project <[email protected]>
Subject Tech & Terrorism: Google Makes Vague Promises To Develop Content Moderation Software For Smaller Platforms
Date January 11, 2023 5:15 PM
  Links have been removed from this email. Learn more in the FAQ.








Tech & Terrorism: Google Makes Vague Promises To Develop Content Moderation
Software For Smaller Platforms



(New York, N.Y.) — Google’s Jigsaw announced
<[link removed]> that it is
developing a free software tool to help smaller websites improve their content
moderation capabilities. The purported new tool comes as tech companies will be
forced to remove terrorist content from their sites or face fines under the
EU’s Digital Services Act. A similar rule will apply in the U.K. under the
Online Safety Bill, which is expected to become law this year.



Unfortunately, Google’s announcement remains vague, and the tool is described
simply as helping human moderators make decisions on flagged content. The
software is also backed by the Global Internet Forum to Counter Terrorism
(GIFCT), which was founded in 2017 by Facebook, Microsoft, Twitter, and
Google-owned YouTube with a promise to work with smaller companies to remove
known terrorist content through a shared hashing database. The seemingly
duplicative efforts by the GIFCT and by Google/Jigsaw raise questions and
concerns about the legitimacy and effectiveness of their pledge to keep
terrorists off their sites.



This series of content moderation measures also shows how government
regulation can compel companies to act and innovate to ensure the safety of
their platforms and users. Big tech firms have consistently argued against new
laws and measures aimed at ensuring the removal of terrorist material,
insisting that their existing technology and other industry efforts were
sufficient. Google’s latest announcement, however, suggests that those
previously announced technologies and efforts were insufficient to keep
terrorist content off their sites and would likely have exposed the companies
to liability under the new EU and U.K. laws. Once again, the tech industry
acts only when compelled by legal obligation or the prospect of reputational
harm.



Indeed, despite assurances otherwise, the GIFCT and its members have a
history of failing to curb extremist and terrorist content on their own
platforms. The GIFCT’s failures were exemplified during the 2019 Christchurch,
New Zealand, attack, which was livestreamed on Facebook. Although the video
was initially taken down, it was re-uploaded millions of times across a
variety of platforms. Google Drive links to the livestream were also shared on
social media platforms, and Counter Extremism Project (CEP) researchers have
located YouTube videos praising Brenton Tarrant. The GIFCT’s disappointing
track record and inability to coordinate content moderation practices across
sites put the public’s safety at risk.



“Unifying content moderation efforts across platforms of all sizes is
important,” said UC Berkeley professor and CEP Senior Advisor Dr. Hany Farid
<[link removed]>. “Smaller platforms,
however, have smaller problems, and Google and the other large platforms that
make up the GIFCT should focus on developing and deploying more advanced
content moderation to more effectively remove terrorists and extremists from
their platforms. Given the continued failure to rein in these online abuses,
government regulation is required to compel the titans of tech to act
responsibly.”



###





