FOR IMMEDIATE RELEASE
A statement from Adrian Shahbaz, Research Director for Technology and Democracy
- Social media and the internet have opened up new ways for hostile powers to directly abuse and influence individuals in democratic societies.
- False news, information operations, and online propaganda pose significant but distinct threats to the functioning of a democracy.
- Responses to these threats have the potential to infringe on fundamental rights, such as freedom of expression, access to information, privacy, and press freedom.
- Any response must be carefully evaluated to ensure that it is strictly necessary to achieve a legitimate aim (protecting democracy) and carried out in a manner that limits unintended consequences and collateral damage. The cure should not be more harmful than the disease.
Our recommended response:
- Remove content that is deliberately and unequivocally false, under policies designed to combat spam or unauthorized use of the platform.
- Label or eliminate automated “bot” accounts. Bots can serve both helpful and harmful purposes, but given their role in spreading disinformation, companies should clearly label suspected bot accounts. Accounts that remain harmful even after being labeled should be removed from the platform.
- Do not remove state-owned media that function as propaganda outlets for hostile powers, unless they violate the platform’s terms of service through actions such as those cited above.
- Most articles published by the state-owned propaganda outlets of hostile powers would be difficult to classify as deliberately and unequivocally “false.”
- At present, banning these outlets would constitute a disproportionate response to the problem and could harm press freedom.
- We recommend tackling the issue through a more effective, transparent, and uniform application of platforms’ existing policies. Options to consider include down-ranking posts made by these outlets in news feeds, combating the artificial amplification of posts through the use of bots and fake accounts, and restricting the outlets’ ability to buy advertising on the platforms.
- We also encourage technology companies to prioritize well-established, credible, and local news sites over state-owned outlets from countries that do not receive a “Free” rating in Freedom House’s Freedom in the World report, such as Russia, China, Turkey, Iran, Saudi Arabia, and the United Arab Emirates.
- Ensure fair and transparent content moderation practices. In order to fairly and transparently moderate public posts on their platforms and services, private companies should do the following:
- Clearly and concretely define what speech is not permissible in their guidelines and terms of service.
- If certain speech needs to be curbed, consider less invasive actions before restricting it outright, such as warning users that they are violating the terms of service or adjusting algorithms that might unintentionally promote disinformation or incitement to violence.
- Ensure that content removal requests from governments are in compliance with international human rights standards.
- Publish detailed transparency reports on content takedowns—both for those initiated by governments and for those undertaken by the companies themselves.
- Provide an efficient avenue for appeal for users who believe that their speech was unduly restricted.
- Engage in continuous dialogue with local civil society organizations.
- Companies should seek out local expertise on the political and cultural context in markets where they have a presence or where their products are widely used. These consultations with civil society groups should inform the companies’ approach to content moderation, government requests, and countering disinformation, among other things.
- We appreciate companies’ understanding that tackling the issue of disinformation and false news requires working with media companies and subject-matter experts. Efforts like the Facebook Journalism Project and the News Integrity Initiative provide crucial support for improving individuals’ digital media literacy. That endeavor will take time, but we believe that education is ultimately better than censorship as a tactic for dealing with disinformation, false news, and propaganda.
Background:
Twitter recently announced that it would no longer accept advertising from state-owned news sources. The statement above lays out additional steps that technology platforms should take.