Tech & Terrorism: Germany’s NetzDG Content Moderation Law Undergoes Revamp
Changes Would Require Further Transparency From Tech Companies In Content
Takedowns
(New York, N.Y.) – Germany’s pioneering online content moderation law, the
Network Enforcement Act (NetzDG), will soon be upgraded. Amendments to the
law, which include requiring tech companies to proactively report extremist
content to law enforcement, simplifying users’ ability to flag suspected
illegal content, and mandating that companies disclose how they handle cases
on their platforms and with what technology, are promising steps forward in
Germany’s effort to prevent online extremism from translating into real-world
violence.
As expected, the tech industry and its lobbyists have continued to pursue
misleading arguments in response to efforts to strengthen the NetzDG. Chief
among the complaints are that the law would stifle free speech and hinder
innovation. Those concerns, however, are unfounded. A joint report by CEP and
the Centre for European Policy Studies (CEPS) found that six
months after NetzDG’s implementation, the law did not result in a flood of
reports or over-blocking, but rather a trickle of takedown requests. The study
also found that the cost of implementing NetzDG was minimal, at 1 percent of
total revenue.
Others have even suggested that a tougher NetzDG could serve as a model for
authoritarian governments seeking to curb political dissent. However,
following this fallacious argument through would mean that democratic
governments should never enact any laws for fear of their misuse.
Democratically made laws by responsible governments do not embolden
authoritarian states. Authoritarian ideologies embolden authoritarian states.
Earlier this year, the Counter Extremism Project (CEP) in Berlin released its
recommendations for a “NetzDG 2.0” after testing big tech’s compliance with
Germany’s 2018 NetzDG online content
moderation law. In order to make social media safer, tech companies must make
transparent the functions, resources, and results of their internal compliance
systems, including the corresponding automated detection techniques as well as
processes for content moderators.
The current version of the law requires online platforms to remove
“manifestly illegal” content within 24 hours, but only after it has been
reported by users. However, as CEP found, more can be done. The March 2020
study revealed that YouTube, Facebook, and Instagram removed just 43.5
percent of clearly extremist and terrorist content, even after that material
was reported for its illegal nature under the NetzDG. Of the companies
studied, YouTube was the least compliant with the law’s requirements,
blocking only 35 percent of the 80 videos that were reported and should have
been blocked. Facebook and Instagram deleted or blocked all of the flagged
content, but Facebook did not remove any content that was not explicitly
flagged, even though that content contained the same illegal symbols. For the
"notice and take down" procedural logic on which the NetzDG is based to take
effect, manifestly illegal content must be searched for online systematically
and continuously and then reported.
###