From xxxxxx <[email protected]>
Subject How Disinformation Could Sway the 2020 Election
Date September 12, 2019 1:57 AM
  Links have been removed from this email. Learn more in the FAQ.

HOW DISINFORMATION COULD SWAY THE 2020 ELECTION  

 

Paul M. Barrett
September 9, 2019
The Conversation



_While foreign election interference has dominated discussion of
disinformation, most intentionally false content targeting U.S. social
media is generated by domestic sources._

In 2016, Russian operatives used Facebook, Twitter and YouTube to sow
division among American voters and boost Donald Trump’s presidential
campaign.

What the Russians used to accomplish this is called
“disinformation,” which is false or misleading content intended to
deceive or promote discord. Now, with the first presidential primary
vote only five months away, the public should be aware of the sources
and types of online disinformation likely to surface during the 2020
election.

First, the Russians will be back. Don’t be reassured by the notorious
Russian Internet Research Agency’s relatively negligible presence
during last year’s midterm elections. The agency might have been
keeping its powder dry in anticipation of the 2020 presidential race.
And it helped that U.S. Cyber Command, an arm of the military,
reportedly blocked the agency’s internet access for a few days right
before the election in November 2018.

Temporarily shutting down the Internet Research Agency won’t be
enough to stop the flow of harmful content. Lee Foster, who leads the
disinformation team at the cybersecurity firm FireEye, told me in an
interview that the agency is “a small component of the overall
Russian operation,” which also includes Moscow’s military
intelligence service and possibly other organizations. Over time,
Foster said, “All of these actors rework their approaches and
tactics.”

And there’s more to fear than just the Russians. I’m the author of a
new report on disinformation and the 2020 election published by the
New York University Stern Center for Business and Human Rights. In the
report, I predict that the Russians won’t be alone in spreading
disinformation in 2020. Their most likely imitator will be Iran,
especially if hostility between Tehran and Washington continues to
mount.

Disinformation isn’t just Russian

In May, acting on a tip from FireEye, Facebook took down nearly 100
Iranian-related accounts, pages and groups. The Iranian network had
used fake American identities to espouse both conservative and liberal
political views, while also promoting extremely divisive anti-Saudi,
anti-Israel and pro-Palestinian themes.

As Senate Intelligence Committee co-chair Mark Warner, a Virginia
Democrat, has said, “The Iranians are now following the Kremlin’s
playbook.”

Mark Warner (@MarkWarner), replying to his own thread, Aug 21, 2018:

“I’ve been saying for months that there’s no way the problem of
social media manipulation is limited to a single troll farm in St.
Petersburg, and that fact is now beyond a doubt.

“We also learned today that the Iranians are now following the
Kremlin’s playbook from 2016.”

While foreign election interference has dominated discussion of
disinformation, most intentionally false content targeting U.S. social
media is generated by domestic sources.

I believe that will continue to be the case in 2020. President Trump
often uses Twitter to circulate conspiracy theories and cast his foes
as corrupt. One story line he pushes is that Facebook, Twitter and
Google are colluding with Democrats to undermine him. Introducing a
right-wing “social media summit” at the White House in July, he
tweeted about the “tremendous dishonesty, bias, discrimination, and
suppression practiced by certain companies.”

Supporters of Democrats also have trafficked in disinformation. In
December 2017, a group of liberal activists created fake Facebook
pages designed to mislead conservative voters in a special U.S. Senate
race in Alabama. Matt Osborne, who has acknowledged being involved in
the Alabama scheme, told me that in 2020, “you’re going to see a
movement toward [political spending from undisclosed sources] on
digital campaigns in the closing days of the race.” He suggests there
could be an effort to discourage Republicans from voting with “an
image of a red wave with a triumphal statement that imbues them with a
sense of inevitable victory: ‘No need to bother voting. Trump has got
it in the bag.’”

Spreading fake videos

Also likely to surface next year: “deepfake” videos. This technique
produces highly convincing – but false – images and audio. In a
recent letter to the CEOs of Facebook, Google and Twitter, House
Intelligence Committee Chairman Adam Schiff, a California Democrat,
wrote: “A timely, convincing deepfake video of a candidate” that
goes viral on a platform “could hijack a race – and even alter the
course of history. … The consequences for our democracy could be
devastating.”

[Embedded video: just one example of a deepfake.]

Instagram could be a vehicle for deepfakes. Owned by Facebook, the
photo and video platform played a much bigger role in Russia’s
manipulation of the 2016 U.S. election than most people realize, and
it could be exploited again in 2020. The Russian Internet Research
Agency enjoyed more user engagement on Instagram than it did on any
other platform, according to a December 2018 report commissioned by
the Senate Intelligence Committee. “Instagram is likely to be a key
battleground on an ongoing basis,” the report added.

Companies could step up

The social media companies are responding to the problem of
disinformation by improving their artificial intelligence filters and
hiring thousands of additional employees devoted to safety and
security. “The companies are getting much better at detection and
removal of fake accounts,” Dipayan Ghosh, co-director of the Harvard
Kennedy School’s Platform Accountability Project, told me.

But the companies do not completely remove much of the content they
pinpoint as false; they merely reduce how often it appears for users,
and sometimes post a message noting that it’s false.

In my view, provably false material should be eliminated from feeds
and recommendations, with a copy retained in a cordoned-off archive
available for research purposes to scholars, journalists and others.

Another problem is that responsibility for content decisions now tends
to be scattered among different teams within each of the social media
companies. Our report recommends that to streamline and centralize,
each company should hire a senior official who reports to the CEO and
is responsible for overseeing the fight against disinformation. Such
executives could marshal resources more easily within each company and
more effectively coordinate efforts across social media companies.

Finally, the platforms could also cooperate more than they currently
do to stamp out disinformation. They’ve collaborated effectively to
root out child pornography and terrorist incitement. I believe they
now have a collective responsibility to rid the coming election of as
much disinformation as possible. An electorate that has been fed lies
about candidates and issues can’t make informed decisions. Votes
will be based on falsehoods. And that means the future of American
democracy – in 2020 and beyond – depends on dealing effectively
with disinformation.


_Paul M. Barrett is Deputy Director, Center for Business and Human
Rights, Stern School of Business, and Adjunct Professor of Law, New
York University._


 

 

 

INTERPRET THE WORLD AND CHANGE IT

 

 


 





Message Analysis

  • Sender: Portside
  • Political Party: n/a
  • Country: United States
  • State/Locality: n/a
  • Office: n/a
  • Email Providers:
    • L-Soft LISTSERV