Tech & Terrorism: Following Public Outcry And Government Criticism, Facebook Makes Another Reactive Policy Change
Company's Latest Policy Still Inadequate For Tackling Misinformation
Last week, Facebook announced a policy change aimed at eliminating deepfakes and the spread of manipulated media ahead of the 2020 presidential election. In the
updated Manipulated Media section of its Community
Standards, Facebook said it would remove misleading
manipulated media that “has been edited or synthesized, beyond
adjustments for clarity or quality, in ways that are not apparent to
an average person, and would likely mislead an average person to
believe that a subject of the video said words that they did not say
AND is the product of artificial intelligence or machine learning,
including deep learning techniques (e.g., a technical deepfake), that
merges, combines, replaces, and/or superimposes content onto a video,
creating a video that appears authentic.”
The announcement represents yet another in a series of
reactive policy changes that are made only after a public relations
crisis or government pressure. Facebook’s latest efforts come after
months of criticism for failing
to adequately address a manipulated video of House Speaker Nancy
Pelosi, and ahead of a company representative’s testimony before the
House Committee on Energy and Commerce. Moreover, Facebook’s new
deepfakes policy fails to cover deliberately deceptive videos created
using low-tech means that could just as easily promote misinformation
and sow discord as high-tech AI-edited videos.
Dr. Hany Farid, a digital forensics expert at the University
of California, Berkeley and senior advisor to the Counter Extremism
Project (CEP), criticized the company’s latest policy and described it
as “narrowly construed.” Dr. Farid told The Washington Post, “These misleading videos were created
using low-tech methods and did not rely on AI-based techniques, but
were at least as misleading as a deep-fake video of a leader
purporting to say something that they didn’t. Why focus only on
deep-fakes and not the broader issue of intentionally misleading
videos?”
For over a decade, Facebook has faced criticism for the
misuse of its platform on issues ranging from the publication of
inappropriate content to user privacy and safety. Rather than taking
preventative measures, Facebook has too often jumped to make policy
changes after damage has already been done. CEP has documented
instances in which Facebook has made express policy changes following
public accusations, a scandal, or pressure from lawmakers. While one
would hope that Facebook is continuously working to improve security
on its platform, there is no excuse for so many of its policy changes
being reactive, and it raises the question of what other scandals are
in the making due to still-undiscovered lapses in Facebook’s current
policies.
To read the CEP report Tracking Facebook’s Policy
Changes, please click here.
###