TRUMP ATTACKED ME. THEN MUSK DID. IT WASN’T AN ACCIDENT.
Yoel Roth
September 18, 2023
New York Times
_Donald Trump publicly attacked me. Two years later, following his acquisition of Twitter and after I resigned my role as the company’s head of trust and safety, Elon Musk added fuel to the fire. I’ve lived with armed guards outside my home._
Donald Trump and Elon Musk, after Musk restored Trump’s Twitter account in November 2022. Image: Reuters / Agence France-Presse (AFP) via India Today
When I worked at Twitter, I led the team that placed a fact-checking label on one of Donald Trump’s tweets for the first time. Following
the violence of Jan. 6, I helped make the call to ban his account from
Twitter altogether. Nothing prepared me for what would happen next.
Backed by fans on social media, Mr. Trump publicly attacked me. Two
years later, following his acquisition of Twitter and after
I resigned my role as the company’s head of trust and safety, Elon Musk added fuel to the
fire. I’ve lived with armed guards outside my home and have had to
upend my family, go into hiding for months and repeatedly move.
This isn’t a story I relish revisiting. But I’ve learned that what
happened to me wasn’t an accident. It wasn’t just personal
vindictiveness or “cancel culture.” It was a strategy — one that
affects not just targeted individuals like me, but all of us, as it is
rapidly changing what we see online.
Private individuals — from academic researchers to employees of tech
companies — are increasingly the targets of lawsuits, congressional
hearings and vicious online attacks. These efforts, staged largely by
the right, are having their desired effect: Universities are cutting
back on efforts to quantify abusive and misleading information
spreading online. Social media companies are shying away from making
the kind of difficult decisions my team did when we intervened against
Mr. Trump’s lies about the 2020 election. Platforms had finally
begun taking these risks seriously only after the 2016 election. Now,
faced with the prospect of disproportionate attacks on their
employees, companies seem increasingly reluctant to make controversial
decisions, letting misinformation and abuse fester in order to avoid
provoking public retaliation.
These attacks on internet safety and security come at a moment when
the stakes for democracy could not be higher. More than 40 major
elections are scheduled to take place in 2024, including in the United
States, the European Union, India, Ghana and Mexico. These democracies
will most likely face the same risks of government-backed
disinformation campaigns and online incitement of violence that have
plagued social media for years. We should be worried about what
happens next.
My story starts with that fact check. In the spring of 2020, after
years of internal debate, my team decided that Twitter should apply a
label
to a tweet of then-President Trump’s that asserted that voting by mail
is fraud-prone, and that the coming election would be “rigged.”
“Get the facts about mail-in ballots,” the label read.
On May 27, the morning after the label went up, the White House senior
adviser Kellyanne Conway publicly identified me
as the head of Twitter’s site integrity team. The next day, The New
York Post put several of my tweets making fun of Mr. Trump and other
Republicans on its cover. I had posted them years earlier, when I was
a student and had a tiny social media following of mostly my friends
and family. Now, they were front-page news. Later that day, Mr. Trump
tweeted that I was a “hater.”
Legions of Twitter users, most of whom days prior had no idea who I
was or what my job entailed, began a campaign
of online harassment that lasted months, calling for me to be fired,
jailed or killed. The volume of Twitter notifications crashed my
phone. Friends I hadn’t heard from in years expressed their concern.
On Instagram, old vacation photos and pictures of my dog were flooded
with threatening comments and insults. (A few commenters, wildly
misreading the moment, used the opportunity to try to flirt with me.)
I was embarrassed and scared. Up to that moment, no one outside of a
few fairly niche circles had any idea who I was. Academics studying
social media call this “context collapse”: things we post on social media with one audience in mind might end up
circulating to a very different audience, with unexpected and
destructive results.
In practice, it feels like your entire world has collapsed.
The timing of the campaign targeting me and my alleged bias suggested
the attacks were part of a well-planned strategy. Academic studies have repeatedly pushed back on claims that Silicon Valley platforms are biased against conservatives.
But the success of a strategy aimed at forcing social media companies
to reconsider their choices may not require demonstrating actual
wrongdoing. As the former Republican Party chair Rich Bond once
described, maybe you just need to “work the refs”: repeatedly
pressure companies into thinking twice before taking actions that
could provoke a negative reaction. What happened to me was part of a
calculated effort to make Twitter reluctant to moderate Mr. Trump in
the future and to dissuade other companies from taking similar steps.
It worked. As violence unfolded at the Capitol on Jan. 6, Jack Dorsey,
then the C.E.O. of Twitter, overruled Trust and Safety’s
recommendation that Mr. Trump’s account should be banned because of
several tweets, including one that attacked Vice President Mike Pence. Mr. Trump was given a 12-hour timeout instead (before being banned on Jan. 8).
Within the boundaries of the rules, staff members were encouraged to
find solutions to help the company avoid the type of blowback that
results in angry press cycles, hearings and employee harassment. The
practical result was that Twitter gave offenders greater latitude:
Representative Marjorie Taylor Greene was permitted to violate
Twitter’s rules at least five times before one of her accounts
was banned in 2022.
Other prominent right-leaning figures, such as the culture war account
Libs of TikTok, enjoyed similar deference.
Similar tactics are being deployed around the world to influence
platforms’ trust and safety efforts. In India, the police visited two of our offices in 2021 when we fact-checked posts from a politician from the ruling party, and the police showed up at an employee’s home after
the government asked us to block accounts involved in a series of
protests. The harassment again paid off: Twitter executives decided
any potentially sensitive actions in India would require top-level
approval, a unique level of escalation of otherwise routine decisions.
And when we wanted to disclose a propaganda campaign operated by a
branch of the Indian military, our legal team warned us that our
India-based employees could be charged with sedition — and face the
death penalty if convicted. So Twitter disclosed the campaign only over a year later, without naming the Indian government as the perpetrator.
In 2021, ahead of Russian legislative elections, officials of a state
security service went to the home of a top Google executive in Moscow
to demand the removal of an app that was used to protest Vladimir Putin. Officers threatened her with imprisonment if the company failed to comply within 24 hours. Both Apple and Google removed the app from their respective stores, restoring it after elections had concluded.
In each of these cases, the targeted staffers lacked the ability to do
what was being asked of them by the government officials in charge, as
the underlying decisions were made thousands of miles away in
California. But because local employees had the misfortune of residing
within the jurisdiction of the authorities, they were nevertheless the
targets of coercive campaigns, pitting companies’ sense of duty to
their employees against whatever values, principles or policies might
cause them to resist local demands. Inspired by these tactics, India and a number of other countries started passing “hostage-taking” laws to ensure social media companies employ locally based staff.
In the United States, we’ve seen these forms of coercion carried out
not by judges and police officers, but by grass-roots organizations,
mobs on social media, cable news talking heads and — in Twitter’s
case — by the company’s new owner.
One of the most recent forces in this campaign is the “Twitter Files,” a
large assortment of company documents — many of them sent or
received by me during my nearly eight years at Twitter — turned over
at Mr. Musk’s direction to a handful of selected writers. The files
were hyped by Mr. Musk as a groundbreaking form of transparency,
purportedly exposing for the first time the way Twitter’s coastal
liberal bias stifles conservative content.
What they delivered was something else entirely. As the tech journalist Mike Masnick put it, after all the fanfare surrounding the initial release of the Twitter Files, in the end “there was absolutely nothing of interest” in the documents, and what little there was had significant factual errors.
Even Mr. Musk eventually lost patience with the effort. But, in the process, it marked a disturbing new escalation in the harassment of employees of tech firms.
Unlike the documents that would normally emanate from large companies,
the earliest releases of the Twitter Files failed to redact the names of even rank-and-file employees. One Twitter employee based in
the Philippines was doxxed and severely harassed. Others have become
the subjects of conspiracy theories. Decisions made by teams of dozens in accordance with Twitter’s written policies were presented as the capricious whims of individuals, each pictured and called out by name. I was, by far, the most frequent target.
The first installment of the Twitter Files came a month after I left the company, and just days after I published a guest essay in The Times and spoke about my experience working for
Mr. Musk. I couldn’t help but feel that the company’s actions
were, on some level, retaliatory. The next week, Mr. Musk went further
by taking a paragraph of my Ph.D. dissertation out of context
to baselessly claim that
I condoned pedophilia — a conspiracy trope commonly used by
far-right extremists and QAnon adherents to smear L.G.B.T.Q. people.
The response was even more extreme than I experienced after Mr.
Trump’s tweet about me. “You need to swing from an old oak tree
for the treason you have committed. Live in fear every day,” said
one of thousands of threatening tweets and emails. That post, and hundreds of others like it, violated the very policies I’d worked to develop and enforce. Under new management, Twitter turned a blind eye, and the posts remain on the site today.
On Dec. 6, four days after the first Twitter Files release, I
was asked to appear at a congressional hearing focused on the files and Twitter’s alleged censorship. In that hearing,
members of Congress held up oversize posters of my years-old tweets
and asked me under oath whether I still held those opinions. (To the
extent the carelessly tweeted jokes could be taken as my actual
opinions, I don’t.) Ms. Greene said on Fox News that
I had “some very disturbing views about minors and child porn” and
that I “allowed child porn to proliferate on Twitter,” warping Mr.
Musk’s lies even further (and also extending their reach). Inundated
with threats, and with no real options to push back or protect
ourselves, my husband and I had to sell our home and move.
Academia has become the latest target of these campaigns to undermine
online safety efforts. Researchers working to understand and address the spread of online misinformation have increasingly become subjects of partisan attacks; the universities they’re affiliated with have become embroiled in lawsuits, burdensome public records requests and congressional proceedings. Facing seven-figure legal bills, even some of the largest and best-funded university labs have said they may have to abandon ship.
Others targeted have elected to change their research focus based on
the volume of harassment.
Bit by bit, hearing by hearing, these campaigns are systematically
eroding hard-won improvements in the safety and integrity of online
platforms — with the individuals doing this work bearing the most
direct costs.
Tech platforms are retreating from their efforts to protect election security and slow the spread of online disinformation. Amid a broader climate of belt-tightening, companies have pulled back especially hard on
their trust and safety efforts. For companies facing mounting pressure from a hostile Congress, these choices are as rational as they are dangerous.
We can look abroad to see how this story might end. Where once
companies would at least make an effort to resist outside pressure,
they now largely capitulate by default. In early 2023, the Indian
government asked Twitter to restrict posts critical of Prime Minister
Narendra Modi. In years past, the company had pushed back on such
requests; this time, Twitter acquiesced.
When a journalist noted that such cooperation only incentivizes
further proliferation of draconian measures, Mr. Musk shrugged:
“If we have a choice of either our people go to prison or we comply
with the laws, we will comply with the laws.”
It’s hard to fault Mr. Musk for his decision not to put Twitter’s
employees in India in harm’s way. But we shouldn’t forget where
these tactics came from or how they became so widespread. From pushing
the Twitter Files to tweeting baseless conspiracies about former
employees, Mr. Musk’s actions have normalized and popularized
vigilante accountability, and made ordinary employees of his company
into even greater targets. His recent targeting of the
Anti-Defamation League has
shown that he views personal retaliation as an appropriate consequence
for any criticism of him or his business interests. And, as a
practical matter, with hate speech on the rise and advertiser revenue in retreat,
Mr. Musk’s efforts seem to have done little to improve Twitter’s
bottom line.
What can be done to turn back this tide?
Making the coercive influences on platform decision making clearer is
a critical first step. And regulation that requires companies to be
transparent about the choices they make in these cases, and why they
make them, could help.
In its absence, companies must push back against attempts to control
their work. Some of these decisions are fundamental matters of
long-term business strategy, like where to open (or not open)
corporate offices. But companies have a duty to their staff, too:
Employees shouldn’t be left to figure out how to protect themselves
after their lives have already been upended by these campaigns.
Offering access to privacy-promoting services can help. Many
institutions would do well to learn the lesson that few spheres of
public life are immune to influence through intimidation.
If social media companies cannot safely operate in a country without
exposing their staff to personal risk and company decisions to undue
influence, perhaps they should not operate there at all. Like others, I worry that such pullouts would worsen the options left to people who
have the greatest need for free and open online expression. But
remaining in a compromised way could forestall a necessary reckoning with censorial government policies. Refusing to comply with morally
unjustifiable demands, and facing blockages as a result, may in the
long run provoke the necessary public outrage that can help drive
reform.
The broader challenge here — and perhaps the inescapable one — is
the essential humanness of online trust and safety efforts. It isn’t
machine learning models and faceless algorithms behind key content
moderation decisions: it’s people. And people can be pressured,
intimidated, threatened and extorted. Standing up to injustice,
authoritarianism and online harms requires employees who are willing
to do that work.
Few people could be expected to take a job doing so if the cost is
their life or liberty. We all need to recognize this new reality, and
to plan accordingly.
_Yoel Roth is a visiting scholar at the University of Pennsylvania and the Carnegie Endowment for International Peace, and the former head of trust and safety at Twitter._
* social media
* twitter
* Google
* Donald Trump
* Elon Musk
* 2024 Elections
* conspiracy theories
* Internet spying
* Digital spying
* worker harassment
* coercion
* election manipulation