Tech & Terrorism: Facebook’s “Redirect Initiative” Puts Onus On Users To
Identify Extremism
(New York, N.Y.) — In early July, Facebook began testing a pilot program on its
platform asking some users if they had been “exposed to harmful extremist
content” or “concerned that someone you know is becoming an extremist.” Called
the “Redirect Initiative,” the pop-up alerts take users to a support page if
they choose to proceed. The program marks Facebook’s latest effort to fend off
more than a decade of criticism over the misuse of its platform, particularly
after the perpetrator of the Christchurch mosque shootings livestreamed his
attack on Facebook Live and after far-right groups were found to have promoted
violence during the 2020 U.S. presidential election using Facebook groups and
pages.
“The Redirect Initiative is Facebook’s latest half measure to tackle extremism
on its platform in which users are asked to do the policing instead of the
companies themselves,” said Counter Extremism Project (CEP) Executive Director
David Ibsen. “By putting the onus on users, Facebook is deflecting from its
responsibility to be more proactive about removing offending content. Moreover,
Facebook’s initiative ignores a crucial root cause for the spread of extremist
content—proprietary algorithms that have a perverse incentive to amplify divisive
and controversial content to keep users on their sites and generate more
revenue for the company.”
CEP Senior Advisor Alexander Ritzmann in June published a policy brief on the
European Union’s Digital Services Act (DSA), titled “Notice And (NO) Action”:
Lessons (Not) Learned From Testing The Content Moderation Systems Of Very Large
Social Media Platforms, which found that “notice and action” systems do not
work as intended. “Notice and action” relies on users to first notify the
platform of illegal content; the platform then determines whether it should be
removed. Based on six independent monitoring reports, Ritzmann found that the
overall average takedown rate of illegal content reported through user notices
by large social media platforms was a mere 42 percent. Despite these alarming
findings, however, the
draft DSA (Article 14) favors the “notice and action” mechanism as the main
content moderation system. This system unrealistically expects the 400,000,000
Internet users in the EU first to be exposed to illegal and possibly harmful
content and then to notify the platforms about it.
In a recent webinar, CEP Senior Advisor Dr. Hany Farid highlighted the
consequences of the promotion of misinformation and divisive content on the
Internet through algorithmic amplification by major tech platforms.
Regarding the role algorithmic amplification plays in the proliferation of
harmful content, Dr. Farid stated, “Algorithmic amplification is the root cause
of the unprecedented dissemination of hate speech, misinformation, conspiracy
theories, and harmful content online. Platforms have learned that divisive
content attracts the highest number of users and as such, the real power lies
with these recommendation algorithms.”
To read CEP’s report, “Notice And (NO) Action”: Lessons (Not) Learned From
Testing The Content Moderation Systems Of Very Large Social Media Platforms,
please click here.
To watch a recording of CEP’s webinar, How Algorithmic Amplification Pushes
Users Toward Divisive Content, please click here.
###