DARK ARTS
How are authoritarians using artificial intelligence for political repression? Steven Feldstein on the global spread of anti-democratic applications for innovative technologies.
Interview by J.J. Gould
May 18, 2023
The Signal
IN MARCH, THE REUTERS NEWS AGENCY published a review of more than 2,000 Russian court cases showing that security-camera footage and facial-recognition technology had been used in the arrests of
hundreds of people. Initially, authorities were using the technology
to identify and detain people who’d joined various anti-government
demonstrations, but after the invasion of Ukraine last year, they
started using it to intercept protesters and prevent them from
demonstrating at all. Now they’re using it to spot and whisk away
opponents of the Kremlin whenever they want. It’s a remarkable
story—and just one in a developing pattern of autocratic regimes
using technologies powered by artificial intelligence to clamp down on
their populations. What’s the extent of all this?
STEVEN FELDSTEIN is a senior fellow at the Carnegie Endowment for
International Peace and the author of _The Rise of Digital
Repression: How Technology Is Reshaping Politics, Power, and
Resistance_. As
Feldstein explains, repression-enabling AI applications have become
key elements of the authoritarian repertoire globally. Autocrats have
invested heavily in them because, although they’ve insulated their
power and rolled back democratic movements in recent years, they still
understand that the biggest enduring threat to that power in the
contemporary world is their own people, either rising up in
revolutions or voting them out in elections. And the biggest emerging
opportunity to control people is by connecting the digital
environments they increasingly live in to state surveillance systems
powered by AI.
_This article is part of a series in partnership with the Human Rights Foundation. Feldstein will be a speaker at the Oslo Freedom Forum in June._
J.J. GOULD: How have autocratic authorities been using artificial
intelligence?
STEVEN FELDSTEIN: There was a moment in the early 2010s when new
digital-information and -communication platforms—social-media
applications especially—had started to play this remarkable role in
helping civilians around the world mobilize and challenge the
autocracies they were living under. We saw this from the color revolutions in post-Soviet Eurasia through the Arab Spring. And it led
to a lot of optimism that these _liberation technologies_—as they
were called—would help propel a new wave of democratic revolutions
globally.
What’s actually happened, alas, is that autocratic governments have
figured out how to use new digital technologies—AI applications
especially—to repress their citizens more effectively, undercut
emerging liberation movements, and reinforce autocratic political
power. We’ve seen this above all in China—but also in Russia, the
Gulf states, and other authoritarian and illiberal regimes. The range
of applications has been expanding, but a few use cases stand out.
One is tracking popular discontent and, when it comes to it,
controlling mass protest. That can work in a number of different ways.
It can work on a mass scale through automated social-media monitoring,
interpreting what people are thinking from what they’re saying
online. It can work through public-surveillance cameras and other ways
of seeing when and where people are gathering—and then preempting
political demonstrations or arresting people who participate in them.
We’ve seen this increasingly in Russia, for example, in these
techniques Moscow uses to pick up and neutralize anti-war protesters.
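To make concrete what automated monitoring at this scale involves, here is a purely illustrative sketch of its crudest form: scoring public posts against a keyword watch-list. Real systems rely on trained language models and direct platform data feeds; the keywords, threshold, and posts below are hypothetical.
```python
# Illustrative sketch only: flag public posts whose text matches
# protest-related keywords, the crudest core of automated monitoring.
# Everything here (keywords, threshold, sample posts) is hypothetical.
from dataclasses import dataclass

KEYWORDS = {"protest", "rally", "march", "square"}  # hypothetical watch-list
THRESHOLD = 2  # flag a post containing at least two watch-list words

@dataclass
class Post:
    author: str
    text: str

def score(post: Post) -> int:
    """Count how many watch-list words appear in the post."""
    words = {w.strip(".,!?").lower() for w in post.text.split()}
    return len(words & KEYWORDS)

def flag(posts: list[Post]) -> list[Post]:
    """Return only the posts that clear the threshold."""
    return [p for p in posts if score(p) >= THRESHOLD]

if __name__ == "__main__":
    sample = [
        Post("a", "Join the rally at the square tomorrow!"),
        Post("b", "Lovely weather today."),
    ]
    for p in flag(sample):
        print(p.author, score(p))
```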
A second use case is maintaining control in an area of a country where
the state is experiencing unrest. China’s Xinjiang province, where
there’s ongoing dissatisfaction and pushback against Beijing among
the region’s Uyghur population, is a prominent example. Here,
Chinese authorities continue to use traditional autocratic repression
tactics, including brutal reeducation camps. But they’re also
supplementing these traditional tactics with advanced machine-learning
technologies—facial-recognition platforms, biometric scanning,
genomic surveillance, and so on—which the regime can integrate with
an information-management system that enables predictive policing
carried out by tens of thousands of security officers.
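The generic technical step behind linking facial recognition to a watchlist is a nearest-neighbor search over face embeddings. Below is a minimal sketch of that step alone, assuming an upstream model has already produced the embedding vectors; the vectors, identifiers, and threshold are hypothetical.
```python
# Minimal sketch of watchlist matching over face embeddings.
# Assumes an upstream model has already produced fixed-length
# embedding vectors; all data and the threshold here are made up.
from typing import Optional
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(query: np.ndarray, watchlist: dict,
          threshold: float = 0.8) -> Optional[str]:
    """Return the watchlist ID whose embedding is most similar to the
    query, if the similarity clears the threshold; otherwise None."""
    best_id, best_sim = None, threshold
    for person_id, emb in watchlist.items():
        sim = cosine_similarity(query, emb)
        if sim >= best_sim:
            best_id, best_sim = person_id, sim
    return best_id

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    watchlist = {f"person_{i}": rng.normal(size=128) for i in range(3)}
    # A query close to person_1's embedding should match it.
    query = watchlist["person_1"] + rng.normal(scale=0.05, size=128)
    print(match(query, watchlist))
```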
A third use case is super-enhancing propaganda and disinformation.
Where an autocratic regime has a constitutional obligation to hold
formal elections, for instance, and where it might have conventionally
used methods of rigging those elections like ballot-stuffing or voter
suppression, it’s now more and more likely to augment those methods
with AI technology. It can now collaborate with bot and troll armies
to spread approved messaging. It can identify and engage key
social-media influencers. It can leverage social-media platforms to
push out automatic, hyper-personalized disinformation campaigns. And
it can use deep-fake technology to generate ever-more realistic audio
and video forgeries to discredit challengers.
GOULD: Why the demand for repression-enabling AI technology? Haven’t
traditional autocratic means of repression worked just fine in
preventing a new wave of democracy globally?
FELDSTEIN: From the end of World War II through the late 1980s, the
most common way for an autocrat to lose power was a coup—which is to
say, by being forced out by elite competitors internal to the regime.
After the Cold War, the threat started to shift toward popular
challengers external to the regime—namely, toward mass revolts or
electoral defeats. The implication for autocrats has been a need to
focus on controlling those challengers—by repressing popular civic
movements and manipulating elections.
There’s a very direct logic aligning this perceived need with AI
technologies. Applications of these technologies for political repression can be expensive to develop—but in the end, not as
expensive, in terms of either resource costs or political risks, as
relying wholly on the manual work, as it were, of old-fashioned
security forces. So in the context of the shifting threat to
autocrats, AI technologies have improved both the effectiveness and
efficiency of political repression.
GOULD: What about the supply side of this technology’s
proliferation? To what extent is it being driven by Chinese
innovation—and maybe an incentive to market it to autocrats
globally, enhancing China’s influence and power?
FELDSTEIN: When we look at the incentives driving the export of these
technologies, there are push and pull dynamics. On the one hand,
Beijing has readily made these tools available to other autocratic
rulers, both by demonstrating what’s possible to acquire and by subsidizing their acquisition. And China has seen benefits from
that—directly in economic influence and indirectly in political
influence.
On the other hand, these other autocratic rulers already want these
tools and are eager to come to those who sell them, whether they’re
Chinese companies or not. U.S., European, and Israeli companies also
manufacture and sell surveillance equipment, spyware, and other tools
for controlling populations.
China has been an innovator in repression-enabling technology, but I
wouldn’t say it’s been _the_ innovator. It’s certainly been
effective at pushing this technology forward with autocrats globally,
but companies from the U.S., Europe, and elsewhere have been in the
game to different degrees, as well, all along the way. And as
societies all over the world become more digital, as people spend more
time online and on digital devices, as all of this becomes more and
more central to how human beings live their lives, it becomes a new
vector for potential control. It becomes a critical arena for
governments to monitor and manage citizens.
So now you have governments all over the world wanting to buy the
technologies that enable this monitoring and management, and you have
companies all over the world manufacturing and selling them. Autocrats
from the Gulf, like Qatar, or from the Middle East, like Egypt, or
from South Asia, like Pakistan, are all to some extent importing
Chinese technologies. But they’re also importing technologies from
companies in other countries, many of which are democratic. The world
has become a lot more complicated in that way.
GOULD: How do you see the dangers of repression-enabling AI
technologies in democratic countries?
FELDSTEIN: A lot of it still remains prospective, but we’ve
certainly already seen these tools being used in conjunction with
social-media platforms by autocratic states to disrupt elections and undermine social trust in rival democratic states. We’ve also
seen them being used by illiberal politicians, often on the far right,
to propagate conspiratorial messaging and extremist signals, to
mobilize supporters and even in some cases to stoke political
violence.
The “Stop the Steal” movement in the U.S., which followed the 2020 election and was linked to the January 6 insurrection, is a case in
point. But there’ve been lots of others, particularly in Europe,
where illiberal politicians and their allies have sought to undermine
social cohesion and undercut democracy—often through messaging and
signals that target outsiders or different racial or ethnic groups.
We’ve seen this in Germany, for instance, in France, and very
notably in Hungary—where Prime Minister Viktor Orbán and his
movement have made extensive use of digital communications to solidify
their power.
It’s hard to know where these tendencies will go. But there’s
understandably a lot of concern in democratic societies about the
kinds of social control the large language models that power AI will
potentially enable—both at an industrial scale, in spreading bad
information, and in ways that are remarkably customized for persuasion
at the individual level too. There’s also a lot of concern about
ways large language models are starting to power surveillance
techniques in criminal detection and law enforcement—with the use of
these techniques already, in some cases, racing ahead without regard
to any regulations that lay out standards and norms for what’s
private or secure and what isn’t.
GOULD: Thinking about this challenge, how would you respond to the
view—as you might hear it from one or two people in the tech
industry—that public regulators just aren’t literate enough in
emerging AI technologies to be able to regulate them properly?
FELDSTEIN: I think it’s a pretty shallow argument, honestly. Some of
the most sophisticated experts on AI in the world work in agencies
like the National Institute of Standards and Technology, for example,
within the Department of Commerce. There are plenty of great people
out there willing to work in the public interest and help ensure
proper accountability in how these technologies are used. The idea
that Silicon Valley has monopolized all the expertise, and it’s
nowhere to be found among those working with the government, just
isn’t true.
GOULD: And thinking back to the issue of global supply lines for
repression-enabling technologies coming out of democratic countries,
how would you think about the role of regulation in addressing that
challenge?
FELDSTEIN: To some extent, it’s not really controllable, in that so
much of the technology in question isn’t what you’d
call _prestige technology:_ It’s not like nuclear weapons, which
very few companies can manufacture or very few countries can
acquire—where you can limit their supply and stop their
proliferation. It’s much more like conventional arms. Once the
technology is out there, there are so many different companies and
software developers and others who’re willing and able to
proliferate it.
It’s true that you can, to some extent, deny the most advanced
technologies—which are the hardest to acquire—to those who’d do the most damage with them against their citizens. As an
example, the U.S. recently issued an executive order on spyware, which I’ve been very supportive of, that would deny access to the U.S. market to companies selling these tools to regimes with bad human-rights records. That creates a big incentive to stop doing it. Does this mean those countries won’t be able to obtain the tools at all? Probably not. They can probably get something
like them from someone else. But it makes it harder, it makes it more
expensive, and it probably keeps these countries from using the top-of-the-line capabilities they’d otherwise want to use.
So measures like this are good, as far as they can go. But on the
whole, it’s difficult to meet the proliferation challenge from the
supply side. It’s a problem that requires a more comprehensive
solution. Part of that has to be supporting global norms that make it
harder for autocratic states to violate the privacy of their citizens
with digital tools. But part of it also has to be democratic societies
doing a better job of modeling those norms themselves.
Some of the biggest abuses and scandals around these technologies
relate, for instance, to an illegal database created by an American
company, with billions of images taken from social media and the
internet, and sold to thousands of government and law enforcement
agencies around the world; or to European drone technology being used in the United States to monitor protests and activity at the border; or to spyware provided by an Israeli company
being used in Mexico. If democracies can’t get their own norms and
practices right here, and can’t manage them through transparent and
accountable governance, we’re not going to be effective at getting a
handle on the situation in autocracies.
GOULD: How optimistic or pessimistic are you about the course of AI
technology in the world now?
FELDSTEIN: I’m pessimistic in the sense that I don’t think we’ve
yet fully internalized the most important lessons from other
technologies that have been introduced at mass scale in recent
years—social media, in particular—and we don’t yet have the
right rules in place to head off some of the worst harm. I’m
thinking principally of the U.S. context here, where I see us, once
again, putting a lot of trust and faith in technology companies to do
the right thing. And that hasn’t worked out very well in the past.
But I’m more optimistic in the sense that I think governments are
starting to respond more effectively—that there’s a growing awareness among policy-makers of the need to put more effective
forms of oversight in place. If you listen to the recent hearings on
the issue in the U.S. Congress—as with Sam Altman, the CEO of
OpenAI—and you compare them with what we heard a few years ago with
Meta’s Mark Zuckerberg, there’s just much more sophistication in
the conversation. And there are many more specific ideas on the table.
So I’m hopeful that something good can come out of this—that maybe
we’ve learned at least some of the lessons we need to have learned;
maybe rather than just sitting and waiting to see what happens—which
has effectively been U.S. policy on technological innovation for many
years now—we’ll be a little more proactive; we’ll follow
Europe’s lead a little more in this sense; we’ll find ourselves
saying, _Wait a minute, we need some regulation here; we need the
right capacities and capabilities to monitor how AI is being used and
to certify its applications_—rather than just hoping this stuff
doesn’t take us in the wrong direction.
_John Jamesen Gould is the editor of The Signal._