From xxxxxx <[email protected]>
Subject How Meta’s Policy Updates Could Encourage Hate and Threaten Democracy
Date January 26, 2025 1:35 AM
  Links have been removed from this email. Learn more in the FAQ.

HOW META’S POLICY UPDATES COULD ENCOURAGE HATE AND THREATEN
DEMOCRACY  


 

Lindsey Shelton
January 24, 2025
Southern Poverty Law Center



_ The changes by Meta, the parent company of Facebook, Instagram,
Threads and WhatsApp, could have far-reaching and dire effects. _

Meta CEO Mark Zuckerberg, center, attends the inauguration ceremony
for Donald Trump at the U.S. Capitol on Jan. 20, 2025. (Credit: Kenny
Holston)

 

Meta CEO Mark Zuckerberg recently announced significant changes to how
the company will moderate its social media content. The changes could
have far-reaching and dire effects for democracy and for people who
have historically been targeted by online hate, experts and advocates
say.

Meta, the parent company of Facebook, Instagram, Threads and WhatsApp,
revised its “hateful conduct” policy to remove the term “hate
speech” and protections against online hate targeted at LGBTQ+
people, women, immigrants, people of color and other groups. Advocacy
group GLAAD cited some examples of language that is now permitted on
the platforms, including:

* Claims that LGBTQ+ people are “mentally ill” or
“abnormal.”
* Calling trans and nonbinary people “it.”
* References to “women as household objects or property.”

Meta is also ending its program that enlisted third-party
fact-checking partners to flag content that did not pass fact checks.
Zuckerberg said in a video announcement that “fact-checkers have
been too politically biased” and have “destroyed more than they
created,” echoing language President Donald Trump and other
Republicans have used for years to attack fact-checking. _The New York
Times_ reported that fact-checking groups who worked with Meta took
issue with Zuckerberg’s characterization and said “they had no role
in deciding what the company did with the content that was
fact-checked.”

Meta will now employ a “Community Notes” model that relies on users
to submit information about content and does not apply to paid ads.
The network X, formerly Twitter, also uses a community notes model.

The Southern Poverty Law Center and other civil and human rights
organizations are concerned that Meta’s new approach will endanger
democracy and the safety of users.

“Fighting misinformation, disinformation and hate speech should be a
top priority of both traditional and social media companies,” SPLC
President and CEO Margaret Huang said. “Failing to do so is
undermining our democracy and threatens the safety of their users.
Instead of mimicking the moves of other companies that have sadly
allowed their platforms to devolve, Meta had an opportunity to lead
with their professed values. Unfortunately, it seems that Meta’s
leadership prioritizes political access more than organizational
values.”

Pasha Dashtgard is the director of research for the Polarization and
Extremism Research and Innovation Lab (PERIL) at American University.
Dashtgard said the effects of Meta’s “explicit commitment to not
countering misinformation” will be felt by everyone.

“This couldn’t be worse, frankly, for us as a democracy and for
all of us,” Dashtgard said, noting that “increased polarization
lays the groundwork for authoritarian undermining of democracy.”

“Bad-faith actors, extremist groups and individuals, organizations
and institutions that are trying to undermine democracy couldn’t be
happier that we are removing the guardrails on our social media
platforms in order to facilitate the creation of fully enclosed
online ecosystems that are able to disseminate misinformation and
motivate people to actively undermine democracy through participation
in antigovernment militias and extremist groups,” he said.

‘Very good news,’ Trump says

Zuckerberg’s announcement about Meta’s policy updates came just
two weeks before Trump was inaugurated for his second presidential
term on Jan. 20. The announcement followed multiple visits by
Zuckerberg to Trump’s Mar-a-Lago estate in recent months. Meta also
contributed $1 million to Trump’s inauguration, hired longtime
Republican lobbyist Joel Kaplan to lead the company’s global policy
team, and appointed UFC president and Trump ally Dana White to the
Meta board, according to ABC News.

Just four years ago, Zuckerberg banned Trump from Facebook and
Instagram, saying the risks of allowing Trump on the platforms were
“simply too great” after he repeatedly used the sites to broadcast
election lies and cheer on the Jan. 6, 2021, insurrection, according
to CNN. Shortly after Meta publicized the new policy updates, Trump
called the announcement “very good news” at a press conference and
said he thought the changes were “probably” a direct response to
threats he has made to Zuckerberg in the past.

Zuckerberg said Meta will now focus on “restoring free expression
on our platforms.” The changes seem to effectively end Meta’s
years-long attempt to balance free expression and civility.

Zuckerberg attended the inauguration along with other prominent
figures in tech, such as Amazon founder Jeff Bezos and CEOs including
Sundar Pichai of Google, Tim Cook of Apple, Shou Zi Chew of TikTok,
and Elon Musk of Tesla and SpaceX. Musk, the owner of X, has also been
tapped by Trump as a leader for the new Department of Government
Efficiency.

‘Flood of propaganda’

Pew Research Center statistics show that 68% of U.S. adults use
Facebook and 47% use Instagram. Approximately 54% of adults say they
at least sometimes get news from social media, up slightly from recent
years. Dashtgard said he expects that distinguishing genuine content
from misinformation, disinformation and fake content could become even
more difficult for users.

“What you’re going to see is a flood of propaganda, hate speech
and extremism in the form of videos, memes, fake AI-generated
articles, AI-generated images, videos and audio that are going to be
indistinguishable from genuine content,” he said. “It’s going to
lead to an information collapse where, really, what’s going to
happen is people are no longer going to be able to adequately
determine what is real from not real.”

The result, Dashtgard said, could be media ecosystems that exist
largely to confirm preexisting worldviews, biases and stereotypes,
regardless of political affiliation.

“But when we’re talking about oppressed and marginalized
communities … there is a more dangerous implication that …
you’re going to be feeding people information that is going to
justify violence and discrimination and hate against marginalized
groups,” he said.

While the argument is often made that reducing content moderation and
removing restrictions on hateful activity enables “free speech” and
open online forums for ideas, Dashtgard said that is not how it plays
out.

“What actually happens is that the most extreme, the most negative,
the content that elicits hate and fear and anger are the stories that
are going to be elevated the most,” he said.

He pointed to psychological research indicating that people tend to be
drawn to political positions more extreme than those they hold and
that they are more predisposed to negative content. Research shows
that content that elicits anger spreads faster on social media
networks than content linked to any other emotion.

‘Check the facts before you believe’

SPLC Intelligence Project Interim Director Rachel Carroll Rivas said
Meta had made progress in controlling harmful content related to
hard-right extremism and hate and antigovernment activities, notably
banning some hard-right public figures and deleting thousands of
militia groups and pages. Still, Meta’s content moderation has never
adequately addressed harmful content consistently present on the
platform, Carroll Rivas said. Particularly concerning now, she said,
is whether the lines are blurring between alt-tech platforms like Gab,
Rumble and Telegram and mainstream platforms like Facebook, Instagram
and X.

Alt-tech spaces were purposely set up without moderation to cater to
users who wanted an unmoderated environment, Carroll Rivas said.

“That led itself to being mostly then a space for people to say and
act on really divisive, violent, discriminatory, biased ideas and
content and (hard-right) organizing,” Carroll Rivas said. “Some of
that same activity did happen in mainstream social media spaces, but
with some content moderation and setting the standard that it was not
going to be allowed, there was a lot less of that. … So, the
question is, is that line between mainstream social media and alt-tech
gone?”

Huang encourages social media users to approach content critically and
thoughtfully to avoid spreading or engaging with misinformation and
disinformation.

“As you engage with companies that have abandoned their commitment
to ensuring accurate and fact-based information, be mindful of what
and who you interact with,” Huang said. “Have healthy skepticism
of the individuals you follow and the posts that go viral, so that you
don’t unwittingly participate in an ecosystem of misinformation.
Check the facts, before you believe or repost.”

Young people are particularly vulnerable to being targeted by harmful
and hateful content on social media. All trusted adults have an
important role to play in supporting young people through polarizing
times and in developing skills and knowledge to guard against online
radicalization, Dashtgard said. SPLC and PERIL offer guides for
parents, caregivers, educators and others to help prevent youth
radicalization and build resilient, inclusive communities.

The SPLC’s Learning for Justice (LFJ) K-12 Digital Literacy
Framework offers a guide for educators supporting students in
developing digital and civic literacy skills. The framework covers
seven key areas for learning and includes lessons, classroom-ready
videos and a corresponding podcast, _The Mind Online_. More than ever,
supporting young people in becoming responsible digital citizens is
critical, LFJ Director Jalaya Liles Dunn said.

“Young people have the power to shape how we cultivate an
environment that centers critical thinking and civic
responsibility,” she said.  

Prevention is the key to helping people of all ages avoid adopting
hateful ideologies based on harmful online content, Dashtgard said.
Resources from SPLC and PERIL, including the new report _Not Just a
Joke: Understanding & Preventing Gender- and Sexuality-Based Bigotry_,
can offer knowledge and strategies for approaching conversations.

“Prevention is the silver bullet,” Dashtgard said. “It’s so
much harder to walk somebody back once they have fallen down a rabbit
hole of conspiracy theories and hateful ideologies. It is paramount
that we offer knowledge and effective strategies for how to prevent
people from being radicalized in the first place.”

* Meta
* Mark Zuckerberg
* hate speech
* democracy

 

 

 

INTERPRET THE WORLD AND CHANGE IT

 

 


Message Analysis

  • Sender: Portside
  • Political Party: n/a
  • Country: United States
  • State/Locality: n/a
  • Office: n/a
  • Email Providers:
    • L-Soft LISTSERV