From Portside Culture <[email protected]>
Subject Why Propaganda, Hate and Political Extremism Thrive in the Attention Economy
Date January 8, 2026 3:25 AM

PORTSIDE CULTURE

WHY PROPAGANDA, HATE AND POLITICAL EXTREMISM THRIVE IN THE ATTENTION
ECONOMY  


 

Chris Featherman
November 13, 2025
LSE Review of Books



_ These two books reveal the urgent challenge of combatting harmful
speech and propaganda – and the real violence it leads to – in our
polarised political moment, writes reviewer Featherman. _

_Dogwhistles and Figleaves: How Manipulative Language Spreads Racism
and Falsehood_. Jennifer Mather Saul. Oxford University Press. ISBN:
9780192871756

_Safe Havens for Hate: The Challenge of Moderating Online
Extremism_. Tamar Mitts. Princeton University Press. ISBN:
9780691258522

Extremist political rhetoric as strategy 

Facing waning support and platform bans, the American far-right
political movement QAnon decided in 2020 to extend its conspiracy
theory narrative to include claims of a global child trafficking
ring, supposedly led by US liberal elites. It did this by hijacking
the #SavetheChildren hashtag, used in a legitimate anti-trafficking
campaign by an organisation of that name, and turning it into covert
extremist messaging – a devious appropriation that, for QAnon,
yielded an energising membership spike.

To philosopher of language Jennifer Mather Saul, this tactic resembles
the increasingly common manipulative uses of harmful political speech
that flout norms against falsehoods in public discourse. For political
scientist Tamar Mitts, who studies hate speech in social media, the
#SavetheChildren example illustrates how extremist groups
strategically shift their online messaging to skirt content
moderation. Considered together, these views, offered in their
respective recent books on hate and manipulation in political
discourse, provide a multidisciplinary perspective on how extremist
rhetoric, despite social and institutional guardrails, succeeds both
offline and online. In doing so, Saul and Mitts collectively
underscore the urgency of this challenge as well as the complexity of
addressing it. 

Weakened norms and hateful speech 

For Saul, examining political rhetoric in _Dogwhistles and Figleaves:
How Manipulative Language Spreads Racism and Falsehood_, this
urgency stems not solely from the sharp upswing in harmful
political speech, but from the degree to which that rhetoric has
become normalised. This normalisation, she argues, has resulted from a
shift in the social expectations that have long constrained actors
from openly expressing racist beliefs and false ideas. And it’s the
weakening of these norms, she claims, that facilitates political
manipulation – particularly of malleable groups that, though they
may accept these norms, can nevertheless be convinced to support
political actors who spread lies and espouse racist ideologies. It’s
a problem, of course, that matters beyond deceptive vote-getting: once
“the unsayable becomes sayable,” as Saul reminds us, history has
shown that “increasingly hateful language is often a precursor to
violence and even genocide” (2).  

Dogwhistles and figleaves

Though Saul examines a range of falsehoods, from conspiracy theorist
hashtags and emojis to compliance lies and political bullshit, her
primary focus is on dogwhistles and figleaves. Dogwhistles, she
explains, are strategic communicative acts comprising two layers of
meaning: one is understood broadly by an out-group, and the other
targets an in-group for whom the secondary layer conveys a coded
message that activates their political or racial biases. (Consider,
for instance, how the term _inner city_ refers, for the general
population, to an urban space, while to racists it covertly
references the marginalised groups that have historically lived in
those spaces.)
Often a dogwhistle is paired with a figleaf, which, as its name
suggests, “provides cover for another utterance that [otherwise]
would be recognised as racist” (71) – for instance, calling a
racial slur, once spoken, _a joke_ or attaching to extremist
insinuations the tag phrase _or that’s what people are saying_. Used
together in political discourse, dogwhistles and figleaves, Saul
argues, form “a powerful mechanism for changing standards of
acceptable utterances” (86), one that makes it easier for false and
racist language to circulate and thus “play an important role in
dividing the populace and inflaming divisions” (104).  

The digital resilience of hate groups 

Such speech, we’ve come to know, readily proliferates on social
media. Platforms, in response, have upped their efforts to remove such
content and, in egregious cases, de-platform those who propagate it.
Yet, as Mitts explains in _Safe Havens for Hate: The Challenge of
Moderating Online Extremism_, militant and hate groups like the
Islamic State and the Proud Boys
have shown remarkable digital resilience, subverting content
moderation policies to continue spreading hate and attracting support.
Studying the online behaviour of such groups, she shows how they
leverage divergences in content moderation across platforms to thrive
online. Mitts couples these insights with threshold analyses of
content moderation frameworks, implemented across platform sizes,
regime types, and geographical contexts, to argue that “militant and
hate organisations’ online success centres on their ability to
operate across many platforms in parallel – a phenomenon not well
captured by current legislation” (5).  

Critical to this success, Mitts shows, are three communication
tactics: migrating, mobilising, and messaging. First, when militant
organisations get banned from one social media platform, many simply
move to another. Though seemingly arbitrary, these migrations are in
fact careful calculations in which organisations, exploiting the
differences in content moderation policies across platforms, weigh the
trade-off between two key communication goals: authenticity and
impact. That is, they accept operating on a platform with a smaller
audience and reach – Gab or Parler, for instance, instead of
Facebook or YouTube – if that platform’s more lenient moderation
policies allow them to retain more of their violent, hateful, and
thus, for them, authentic content.

Having moved to smaller platforms, these groups then seek to mobilise
supporters similarly drawn to less moderated spaces. It’s a calculus
in which what’s lost through migration in audience size is gained
from access to individuals more susceptible to extremist propaganda
– typically those aggrieved over narratives of political animus and
cultural displacement. Alternatively, to remain on highly moderated
platforms, extremist organisations will simply shift their messaging.
Whether softening their content away from violence towards
governance and civilian affairs, as in the case of the Taliban, or
using covert language of the type Saul documents, this shift allows extremist
groups to elude moderation and thereby reach larger audiences – or
simply steer them to a less-moderated space. In the case of the
Islamic State, Mitts shows how they have even hidden propaganda in
appropriated content or added digital noise to text and images to
throw off the artificial intelligence moderation algorithms deployed
by large platforms. 

The challenges of content moderation 

To counter such sophisticated and formidable resilience, Mitts sees
convergence – inter-platform cooperation and alignment to moderate
content – as a potentially powerful countermeasure. Yet she never
fully commits to it as a solution. According to her analysis,
large-scale governmental interventions like the EU Digital Services
Act
have shown limited effectiveness in removing harmful content, as have
efforts, predominantly in the US, to incentivise social media
platforms to self-regulate. Further, while instances of cross-platform
convergence have reduced the official online presence of extremist
groups, their unofficial presence, through unaffiliated accounts
disseminating their content, remains largely unaffected by
moderation.  

Importantly, Mitts also highlights the collateral damage of content
moderation and how convergence can compound it. These risks range from
the inadvertent removal of non-harmful content – an error which
occurs, she notes, at a higher rate for historically marginalised
groups – to the misuse, by governments, of the domestic terrorism
classification in moderation policies to suppress dissent.
These challenges appear to explain Mitts’ support for a
public-private approach in which “governments put pressure on
platforms and civil society organisations […] make concerted efforts
to facilitate collaboration between them” (155). It’s a strategy
that resembles certain frameworks for combatting disinformation:
pragmatic,
institutional, and thus evidencing a persisting belief in liberal
rationality. Amid normalised celebrity politics, a splintered public
sphere, and a strained social contract, it’s a belief some might see
as bullish.   

Political lies in the attention economy

Despite the emergence of the so-called post-truth era, “political
lies,” as Saul reminds us, “are nothing new” (116). Nor is it
news that political actors, whether from the centre or the extremes,
have long used harmful language to galvanise and divide, exploiting
existing societal and cultural rifts. Disagreements, for instance,
over what even constitutes harmful speech, covert or overt, exacerbate
the challenges of moderating it, a problem thus far not readily solved
by AI.
What’s changed, then, are the incentive structures that, as social
and political philosophy scholar Adam Gibbons has argued,
make bad language good politics. The harmful content that both Mitts
and Saul examine has become so ubiquitous because, to put it bluntly,
it trends so well in our emotion-fuelled attention economies. 

And given the fragile balance between preventing harm and protecting
free speech, as Saul and Mitts in their own ways each reaffirm, it’s
little surprise that content moderation has become a partisan issue.
Equally unsurprising, but arguably more urgent, is the point that
Mitts and Saul, employing diverse methods at differing scales,
jointly arrive at: that it’s the aggrieved who are most susceptible
to propaganda. And so, as the aggrieved increasingly manifest their
resentments beyond language through violence, the ways that Saul and
Mitts variously attend to the intersections between harmful speech
and manipulation are among their most timely contributions.

A turn in the fight against hateful content

But what’s next? Should we continue working to understand those
grievances, together with their root causes, or risk validating them
in doing so? Do we struggle to shore up not only the rules but also
the norms that deter speakers from producing and circulating
propaganda? Is the solution to be found, as Saul advocates, in the
kind of pre-bunking inoculation strategies that have shown some
success in combatting misinformation? Or do we instead, as scholar
of philosophy and language Marina Sbisà has suggested, attend to the
ethical responsibility of the listener to reject harmful speech? These
debates might be as fraught, interminable, and necessary as any the
public sphere has thus far afforded – ones that, despite its deep
flaws and widening fractures, just might warrant its protection and
preservation.

_NOTE: This review gives the views of the author and not the position
of the LSE Review of Books blog, nor of the London School of Economics
and Political Science._

 

_Chris Featherman, PhD, lectures in the Comparative Media
Studies/Writing Department at the Massachusetts Institute of
Technology. He writes about language and ideology in the public
sphere, most recently for the Los Angeles Review of Books. He is the
author of Discourses of Ideology and Identity: Social Media and the
Iranian Election Protests._

* Propaganda
* Racism
* misinformation
* Online Extremism


 

 

 

INTERPRET THE WORLD AND CHANGE IT

 

 

Submit via web
Submit via email
Frequently asked questions
Manage subscription
Visit portside.org

Bluesky
Facebook

 


